Remote Data Mining And Management Job In Data Science And Analytics

Small Task on Python & Spark (PySpark)


Hi,
I need a small task, under an hour, done using PySpark.
Add some Spark code to existing Python code.
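
For context, a task like this usually amounts to wrapping the existing Python logic in a SparkSession and moving its data into a DataFrame. The sketch below only illustrates that pattern; the function name, column names, and sample records are assumptions, since the actual script is not included in the posting.

from pyspark.sql import SparkSession

def run_with_spark(records):
    # Start (or reuse) a SparkSession next to the existing Python code.
    spark = SparkSession.builder.appName("small-pyspark-task").getOrCreate()

    # Move the plain-Python records into a Spark DataFrame.
    df = spark.createDataFrame(records, ["id", "amount"])

    # Example Spark step: keep only rows with a positive amount.
    df.filter(df["amount"] > 0).show()

    spark.stop()

if __name__ == "__main__":
    # Hypothetical output of the pre-existing Python code.
    records = [(1, 10.5), (2, -3.0)]
    run_with_spark(records)
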
About the recruiter
Member since Mar 14, 2020
Ishan Srivstava
from Central Visayas, Philippines

Skills & Expertise Required

Apache Spark 

Open for hiring. Apply before: Aug 12, 2024

Work from Anywhere

40 hrs / week

Type: Hourly

Remote Job

Cost: $26.80


Similar Projects

Train me on the Hadoop ecosystem.

1. Sourcing data from multiple systems into different file formats.
2. Ingesting the extracted data into a Hadoop cluster (CSV, XML, JSON, fixed width, etc.); see the ingestion sketch below.
3. Using MapReduce and Pig to transform the data and load it into an MPP system.
4. Stitch all abo...read more
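
The steps above are described in terms of MapReduce and Pig, but since the posting on this page is PySpark-focused, here is a minimal, hypothetical PySpark sketch of the ingestion step (all paths and file layouts are assumptions):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ingest-demo").getOrCreate()

# Read two of the extracted formats listed above (paths are hypothetical).
csv_df = spark.read.option("header", "true").csv("hdfs:///landing/extract.csv")
json_df = spark.read.json("hdfs:///landing/extract.json")

# Land both on the cluster as Parquet for downstream transformation and loading.
csv_df.write.mode("overwrite").parquet("hdfs:///raw/extract_csv")
json_df.write.mode("overwrite").parquet("hdfs:///raw/extract_json")

spark.stop()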

Build out Advanced Analytics and BI Platform

I will share my use case; I am looking to build out an advanced analytics and BI platform that is cost-effective yet viable (capable of working successfully) and scalable, together with shortlisted candidates.

The current use case is to collect data from CRM...read more

Converting JSON or Avro files to Parquet

I need to convert JSON, Avro, or other row-based format files in S3 into the Parquet columnar storage format using an AWS service like EMR or Glue.

I already have code that converts JSON to Parquet using Python, but the process is very manual, acco...read more
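
A minimal PySpark version of that conversion, which could run as an EMR step or be adapted to an AWS Glue job, might look like the sketch below (bucket names, paths, and the partition column are assumptions):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("json-to-parquet").getOrCreate()

# Read row-based JSON files from S3 (bucket and prefix are hypothetical).
df = spark.read.json("s3://source-bucket/events/")

# Write them out as columnar Parquet, partitioned by a hypothetical date column.
(df.write
   .mode("overwrite")
   .partitionBy("event_date")
   .parquet("s3://target-bucket/events-parquet/"))

spark.stop()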

Get paid to review Big Data Engineer (Spark, Python) questions.

We are looking for subject matter experts to review our Big Data Engineer interview screen tests: Spark-Python and related topics. These questions will be used as interview questions in a hiring process and we need to be sure that the problem stateme...read more

MySQL to Hadoop review, design and implementation

Description:
We have a running environment with a MySQL db and we are starting to ingest more data than the db can handle, so we are looking at alternatives: architecting a Hadoop/Spark environment to offload most of that data into a Hadoop cluster. ...read more
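
A first cut of the offload itself is often a Spark JDBC read from MySQL followed by a Parquet write to HDFS, roughly as sketched below (hostnames, credentials, table name, and output path are assumptions, and the MySQL JDBC driver must be on the Spark classpath):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mysql-offload").getOrCreate()

# Pull one table from the running MySQL database over JDBC.
orders = (spark.read.format("jdbc")
          .option("url", "jdbc:mysql://mysql-host:3306/appdb")
          .option("dbtable", "orders")
          .option("user", "report_user")
          .option("password", "change-me")
          .option("driver", "com.mysql.cj.jdbc.Driver")
          .load())

# Land the table as Parquet on HDFS, where Spark/Hive can query it.
orders.write.mode("overwrite").parquet("hdfs:///warehouse/offload/orders")

spark.stop()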