*Direct Client:*

*Data Platform Engineer (Big Data and AWS)*

*Bethesda, MD*

*12 Months*

*In-Person Interview after a Phone Screen*

*Start Date: ASAP*

*Rate: $80 - 85/hr, all-inclusive*



*Required:*

   - The ideal candidate for our team is a thinker and a doer: someone who
   loves algorithms and mathematical precision, but at the same time enjoys
   implementing real systems, and is motivated by the prospect of doing
   something never done before.
   - You will build data pipeline frameworks to automate high-volume and
   real-time data delivery from various source channels: AWS, GCP, Azure, and
   a multitude of on-premises data aggregators.
   - Heavy use of AWS services such as Lambda, Glue, Kafka, Kinesis,
   Elasticsearch, data warehouses, data lakes, analytics, CloudTrail, and
   CloudWatch (a minimal sketch follows this list).
   - You will build data APIs that support critical operational and
   analytical applications.
   - You will transform complex analytical models into scalable,
   production-ready solutions.
   - You will continuously integrate and ship code into our cloud
   production environments.
   - You will work directly with Product Owners and Developers to deliver
   data products in a collaborative and agile environment.
   - You will bring a passion for staying on top of tech trends,
   experimenting with and learning new technologies, participating in
   internal and external technology communities, and mentoring other members
   of the engineering community.
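
For context on the event-driven delivery work described above, here is a
minimal sketch (not the team's actual code) of an AWS Lambda handler that
consumes a batch of Kinesis records and lands them in S3 as newline-delimited
JSON. The bucket name and key layout are hypothetical placeholders.

```python
import base64
import boto3

# Hypothetical destination bucket, for illustration only.
DEST_BUCKET = "example-data-lake-raw"

s3 = boto3.client("s3")

def handler(event, context):
    """Entry point for a Kinesis-triggered AWS Lambda function."""
    lines = []
    for record in event["Records"]:
        # Kinesis delivers each payload base64-encoded.
        payload = base64.b64decode(record["kinesis"]["data"])
        lines.append(payload.decode("utf-8"))

    if lines:
        # One object per invocation; the request id keeps keys unique.
        key = f"raw/{context.aws_request_id}.json"
        s3.put_object(Bucket=DEST_BUCKET, Key=key,
                      Body="\n".join(lines).encode("utf-8"))

    return {"records_written": len(lines)}
```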



*Responsibilities:*

   - Design, build, and launch efficient, reliable data pipelines to move
   and transform data, from large batch volumes to small event-driven
   payloads.
   - Build a platform to run micro ETL batches that process datasets in
   near real time (see the PySpark sketch after this list).
   - Perform proofs of concept and choose the right technologies.
   - Intelligently design data models for optimal storage and retrieval.
   - Build robust systems with an eye on the long-term maintenance and
   support of the application.
   - Drive cross-team design and influence development through technical
   leadership and mentoring.
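
As one illustration of the "micro ETL batches" responsibility above, a
minimal PySpark sketch that reads a small batch of raw JSON from S3, derives
a date partition, and writes Parquet to a curated zone. The paths and the
`event_ts` field are hypothetical placeholders, not an actual schema.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical S3 locations, for illustration only.
SRC = "s3://example-data-lake-raw/raw/"
DST = "s3://example-data-lake-curated/events/"

spark = SparkSession.builder.appName("micro-etl-batch").getOrCreate()

# Read one small batch of newline-delimited JSON events.
events = spark.read.json(SRC)

# Light transform: derive a date partition column from a (hypothetical)
# event_ts field and drop rows where it fails to parse.
curated = (events
           .withColumn("event_date", F.to_date(F.col("event_ts")))
           .dropna(subset=["event_date"]))

# Partitioned Parquet keeps downstream retrieval cheap.
curated.write.mode("append").partitionBy("event_date").parquet(DST)
```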



*Basic Qualifications:*

   - Degree in Computer Science, Engineering, Mathematics, Physics, or a
   related field.
   - 3+ years' experience building modern data pipeline solutions with
   several of the following: AWS Lambda, Airflow, Kafka, AWS Kinesis Data
   Streams, data warehouses, data lakes, Spark, AWS analytics services, AWS
   CloudTrail, and AWS CloudWatch.
   - 3+ years' experience developing software solutions to solve complex
   business problems.
   - 3+ years' experience designing, developing, and implementing data
   pipelines and applications to stream and process datasets at low latency.
   - 3+ years' experience developing distributed systems and data
   architectures, including Lambda-architecture designs that pair batch and
   stream data processing pipelines (see the streaming sketch after this
   list), and optimizing the distribution, partitioning, and MPP execution
   of high-level data structures.
   - 3+ years' experience with Agile engineering practices.
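
The Lambda-architecture bullet above pairs a batch layer with a speed layer.
As a sketch of the speed-layer half only, under the assumption of a Kafka
source: a Spark Structured Streaming job that subscribes to a hypothetical
topic and lands micro-batches in the lake. The broker address, topic, and
paths are placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("speed-layer").getOrCreate()

# Subscribe to a hypothetical Kafka topic; broker address is a placeholder.
stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load())

# Kafka delivers key/value as binary; cast the value to a string payload.
payloads = stream.select(F.col("value").cast("string").alias("payload"))

# Checkpointing gives the file sink exactly-once semantics across restarts.
query = (payloads.writeStream
         .format("parquet")
         .option("path", "s3://example-data-lake-speed/events/")
         .option("checkpointLocation", "s3://example-data-lake-speed/_chk/")
         .start())

query.awaitTermination()
```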



*Preferred Qualifications:*

   - Bachelor’s, Master’s, or Ph.D. in Computer Science, or equivalent work
   experience.
   - 7+ years' professional experience in infrastructure development of
   multi-threaded, scalable, and highly available distributed systems.
   - Infrastructure implementation and tuning experience in the big data
   ecosystem (HDP & HDF (Hortonworks), Amazon EMR, Hadoop, Spark, R, Presto,
   Hive), databases (MariaDB, MySQL, PostgreSQL, Microsoft SQL Server),
   NoSQL (Amazon DynamoDB, HBase, MongoDB, Couchbase, Cassandra), data
   warehousing (Netezza, dashDB), or data migration and integration.
   - Infrastructure implementation experience with Python, Scala, or Java.
   - Experience working in a public cloud environment, particularly AWS.
   - Familiarity with cloud warehouse tools like Snowflake.
   - Experience with messaging, streaming, and complex event processing
   tooling and frameworks such as Kinesis, Kafka, Spark Streaming, Flink,
   NiFi, etc.
   - Infrastructure implementation experience with AWS technologies like
   Redshift, S3, EC2, Data Pipeline, and EMR.
   - Infrastructure implementation experience building RESTful APIs to
   enable data consumption (a minimal data API sketch follows this list).
   - Familiarity with build tools such as Terraform or CloudFormation and
   automation tools such as Jenkins or CircleCI.
   - Familiarity with practices like continuous development, continuous
   integration, and automated testing.
   - Experience in Agile/Scrum application development.
   - Infrastructure implementation experience in the setup, configuration,
   and security of Hadoop clusters using Kerberos, Ranger, and Ranger KMS.
   - Infrastructure implementation experience implementing deep learning
   frameworks like MXNet, Caffe2, TensorFlow, Theano, CNTK, and Keras to
   help our customers build DL models.
   - Infrastructure implementation experience implementing Spark ML and
   Amazon Machine Learning (AML) to help our customers build ML models.
   - Infrastructure implementation experience with machine learning and/or
   artificial intelligence algorithms and libraries, such as TensorFlow.
   - Experience leading teams in code development and balancing feature
   requests against feasibility constraints.
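
On the RESTful data API point above, a minimal sketch using FastAPI (one
reasonable framework choice; the posting does not name one). The metric
store here is an in-memory dict standing in for a real warehouse or
key-value backend; all names and values are hypothetical.

```python
from fastapi import FastAPI, HTTPException

app = FastAPI(title="example-data-api")

# Hypothetical in-memory stand-in for a real metrics store
# (e.g. Redshift or DynamoDB behind a data-access layer).
METRICS = {"daily_active_users": 1234, "events_ingested": 987654}

@app.get("/metrics/{name}")
def read_metric(name: str):
    """Return a single named metric for downstream consumers."""
    if name not in METRICS:
        raise HTTPException(status_code=404, detail="unknown metric")
    return {"metric": name, "value": METRICS[name]}
```

Run with, e.g., `uvicorn data_api:app` if the file is saved as data_api.py.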



Warm Regards,

Pavan Pisupati | Sr. Recruiter | *AQUINAS CONSULTING*
<http://www.aquinasconsulting.com/>

203-647-7964 | ppisup...@aquinasconsulting.com | Connect with me on LinkedIn
<http://www.linkedin.com/in/ppisupati>
