• Big Data Developer

    Job Location(s): US-CA-Pleasanton
    Posted Date: 2 weeks ago (12/6/2018 12:57 PM)
    Job ID: 2018-5236
    # of Openings: 1
    Category: Technology Experts - Technical Consultant
  • Overview

    Perficient

     

    At Perficient you’ll deliver mission-critical technology and business solutions to Fortune 500 companies and some of the most recognized brands on the planet. And you’ll do it with cutting-edge technologies, thanks to our close partnerships with the world’s biggest vendors. Our network of offices across North America, as well as locations in India and China, will give you the opportunity to spread your wings, too.

     

    We’re proud to be publicly recognized as a “Top Workplace” year after year. This is due, in no small part, to our entrepreneurial attitude and collaborative spirit, which set us apart and keep our colleagues impassioned, driven, and fulfilled.

     

    Perficient currently has a career opportunity for a Java 8 / Spark Developer in our Pleasanton, CA office.

     

    As a Big Data Developer, you will be working with clients to implement leading edge data analytics and cloud solutions across a range of industries. We are looking for software engineers with a strong understanding of the full data development lifecycle, including requirements gathering, solution design, development, and production deployment.

     

    The ideal candidate will have solid development experience and the desire and passion to learn big data technologies. We will provide training for the selected candidate to learn Spark development.

     Responsibilities

    • Participate in technical planning and requirements gathering phases, including the design, coding, testing, troubleshooting, and documentation of big data-oriented software applications. Take responsibility for the ingestion, maintenance, improvement, cleaning, and manipulation of data in the business’s operational and analytics databases, and troubleshoot any existing issues.
    • Design, build, and launch extremely efficient and reliable data pipelines to move data (in both large and small volumes) to our data warehouses.
    • Design, build, and launch new data extraction, transformation, and loading processes in production.
    • Create new systems and tools that enable the customer to consume and understand data faster.
    • Build, implement, and support the data infrastructure; ingest and transform data (ETL/ELT processes); a minimal illustrative sketch follows below.
    • Implement, troubleshoot, and optimize distributed solutions based on modern big data technologies such as Hive, Hadoop, Spark, Elasticsearch, Storm, and Kafka, in both on-premises and cloud deployment models, to solve large-scale processing problems.
    • Define and build large-scale, near real-time streaming data processing pipelines that enable faster, better, data-informed decision-making within the business.
    • Work within a team of industry experts on cutting-edge big data technologies to develop solutions for deployment at massive scale.
    • Study data, identify patterns, make sense of them, and convert the findings into algorithms.
    • Design and plan BI and other visualization tools that capture and analyze data from multiple sources to support data-driven decisions; debug, monitor, and troubleshoot these solutions.

    Keep up with industry trends and best practices, advising senior management on new and improved data engineering strategies that will drive departmental performance, improve data governance across the business, promote informed decision-making, and ultimately improve overall business performance.
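
    For illustration only, the sketch below shows a minimal Java 8 / Spark batch ETL job of the kind these responsibilities describe: it reads raw CSV events, filters and reshapes them, and writes partitioned Parquet to a warehouse landing zone. All paths, column names, and the application name are hypothetical placeholders rather than details from this posting.

        // Minimal, illustrative Java 8 / Spark batch ETL sketch (hypothetical paths and columns).
        import org.apache.spark.sql.Dataset;
        import org.apache.spark.sql.Row;
        import org.apache.spark.sql.SparkSession;
        import static org.apache.spark.sql.functions.col;
        import static org.apache.spark.sql.functions.to_date;

        public class EventIngestJob {
            public static void main(String[] args) {
                SparkSession spark = SparkSession.builder()
                        .appName("event-ingest")               // hypothetical application name
                        .getOrCreate();

                // Read raw events from the landing area (placeholder path).
                Dataset<Row> raw = spark.read()
                        .option("header", "true")
                        .csv("s3a://example-bucket/raw/events/");

                // Clean and reshape: drop incomplete rows and derive a partition column.
                Dataset<Row> cleaned = raw
                        .filter(col("event_id").isNotNull())
                        .withColumn("event_date", to_date(col("event_ts")));

                // Load into the curated zone, partitioned by date, for downstream consumers.
                cleaned.write()
                        .mode("overwrite")
                        .partitionBy("event_date")
                        .parquet("s3a://example-bucket/curated/events/");

                spark.stop();
            }
        }

    In practice a job like this would be parameterized per environment and scheduled by an orchestrator, but the read-transform-write structure is representative of the pipelines described above.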

     

     


    Qualifications

     

    •  Required Qualifications:

      • Bachelor’s degree with a minimum of 3+ years of relevant experience, or equivalent.
      • Minimum of 2 years of Java/J2EE software development experience in a complex enterprise system environment, building software from conception to production deployment.
      • 1+ years of experience with Java and the Spring Framework is mandatory.
      • Experience building data pipelines with Java and Spark is a plus.
      • 2+ years of experience with traditional relational databases such as Oracle, SQL Server, PostgreSQL, and MySQL.
      • 1+ years of hands-on experience with messaging systems such as Kafka, and with Spark data manipulation and pipeline creation (see the streaming sketch following the qualifications lists).
      • Experience working in an Agile/Scrum environment.
      • Self-starter and team player, capable of working with a team of architects, developers, business/data analysts, QA, and client stakeholders.
      • Proficient understanding of distributed computing principles.
      • Strong written and verbal communication skills.

       Preferred Qualifications:

      • 2+ years of experience working with big data technologies (Hadoop, Hive, Pig, Storm, NoSQL, HBase, Cassandra, Druid), preferably on Azure HDInsight.
      • 1+ years of hands-on experience working with Business Intelligence and Reporting.
      • 1+ years of hands-on experience working with AWS, Azure, Google Cloud, or another cloud platform based on IaaS and PaaS solutions.
      • Hands-on experience with DevOps solutions such as Puppet, AWS CloudFormation, Docker, and microservices.
      • Experience with integration of data from multiple data sources.
      • Experience in the Business Intelligence and visualization space (Tableau, Power BI, Qlik, Cognos, SAP BOBJ, etc.).
      • Passion for working with open-source technologies as well as commercial platforms
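
    As a purely illustrative companion to the Kafka and Spark items above, the following minimal sketch (assuming Spark 2.x Structured Streaming with the spark-sql-kafka connector on the classpath) shows the shape of a near real-time pipeline: it subscribes to a Kafka topic, counts events in one-minute windows, and prints the running counts. The broker address, topic name, and application name are hypothetical; a production job would write to a durable sink rather than the console.

        // Minimal, illustrative Spark Structured Streaming sketch in Java (hypothetical broker and topic).
        import org.apache.spark.sql.Dataset;
        import org.apache.spark.sql.Row;
        import org.apache.spark.sql.SparkSession;
        import org.apache.spark.sql.streaming.StreamingQuery;
        import org.apache.spark.sql.streaming.StreamingQueryException;
        import static org.apache.spark.sql.functions.col;
        import static org.apache.spark.sql.functions.window;

        public class ClickstreamCountsJob {
            public static void main(String[] args) throws StreamingQueryException {
                SparkSession spark = SparkSession.builder()
                        .appName("clickstream-counts")         // hypothetical application name
                        .getOrCreate();

                // Subscribe to a Kafka topic (broker address and topic are placeholders).
                Dataset<Row> events = spark.readStream()
                        .format("kafka")
                        .option("kafka.bootstrap.servers", "broker:9092")
                        .option("subscribe", "clickstream")
                        .load();

                // Count events per one-minute window, keyed by the Kafka message key.
                Dataset<Row> counts = events
                        .selectExpr("CAST(key AS STRING) AS page", "timestamp")
                        .groupBy(window(col("timestamp"), "1 minute"), col("page"))
                        .count();

                // Emit the running counts to the console; a real deployment would target
                // a warehouse table, another Kafka topic, or a similar durable sink.
                StreamingQuery query = counts.writeStream()
                        .outputMode("complete")
                        .format("console")
                        .start();

                query.awaitTermination();
            }
        }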
    • Perficient full-time employees receive complete and competitive benefits. We offer a collaborative work environment, competitive compensation, generous work/life opportunities and an outstanding benefits package that includes paid time off plus holidays. In addition, all colleagues are eligible for a number of rewards and recognition programs including billable bonus opportunities. Encouraging a healthy work/life balance and providing our colleagues great benefits are just part of what makes Perficient a great place to work. 

     

    More About Perficient

     

    Perficient is the leading digital transformation consulting firm serving Global 2000 and enterprise customers throughout North America. With unparalleled information technology, management consulting and creative capabilities, Perficient and its Perficient Digital agency deliver vision, execution and value with outstanding digital experience, business optimization and industry solutions.

     

    Our work enables clients to improve productivity and competitiveness; grow and strengthen relationships with customers, suppliers and partners; and reduce costs.  Perficient's professionals serve clients from a network of offices across North America and offshore locations in India and China. Traded on the Nasdaq Global Select Market, Perficient is a member of the Russell 2000 index and the S&P SmallCap 600 index.

     

    Perficient is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law.

     

    Disclaimer: The above statements are not intended to be a complete statement of job content, but rather to act as a guide to the essential functions performed by the employee assigned to this classification. Management retains the discretion to add to or change the duties of the position at any time.

     

     
