• Big Data Architect

    Job Location(s): US-CA-Pleasanton
    Posted Date: 4 days ago (2/11/2019 3:00 PM)
    Job ID: 2018-5236
    # of Openings: 1
    Category: Technology Experts - Technical Consultant
  • Overview

    Perficient

     

    At Perficient you’ll deliver mission-critical technology and business solutions to Fortune 500 companies and some of the most recognized brands on the planet. And you’ll do it with cutting-edge technologies, thanks to our close partnerships with the world’s biggest vendors. Our network of offices across North America, as well as locations in India and China, will give you the opportunity to spread your wings, too.

     

    We’re proud to be publicly recognized as a “Top Workplace” year after year. This is due, in no small part, to our entrepreneurial attitude and collaborative spirit that sets us apart and keeps our colleagues impassioned, driven, and fulfilled.

     

    Perficient currently has a career opportunity for a Java 8 / Spark Developer in our Pleasanton, CA office.

     

    As a Big Data Developer, you will work with one of our marquee clients, an industry leader in healthcare. They are implementing a system of intelligence spanning Big Data, analytics, and data science with complete control, automation, and eventually AI. This is an exciting journey with many complex processes and a steep learning curve.

     

    The ideal candidate will have solid development experience along with the desire and passion to learn big data technologies. We will provide training for the right candidate to learn Spark development.

     Responsibilities

    • Participate in technical planning and requirements gathering, including design, coding, testing, troubleshooting, and documentation of big data-oriented software applications.
    • Architect, design, and build Azure and open-source frameworks for ingesting, transforming, enriching, cleaning, and manipulating data in the business’s operational and analytics databases, and troubleshoot any issues that arise.
    • Build frameworks for data ingestion and DevOps using technologies such as Kafka, Hive, HBase, Flume, Sqoop, and Azure Data Factory.
    • Design, build, and launch efficient, reliable data pipelines to move data (both large and small volumes) into a data lake in Azure (see the sketch below).
    • Build, implement, and support the data infrastructure; ingest and transform data (ETL/ELT processes).
    • Implement, troubleshoot, and optimize distributed solutions based on modern big data technologies such as Hive, Hadoop, Spark, Elasticsearch, Storm, and Kafka, in both on-premises and cloud deployment models, to solve large-scale processing problems.
    • Define and build large-scale, near-real-time streaming data processing pipelines that enable faster, better, data-informed decision making within the business.
    • Work within a team of industry experts on cutting-edge Big Data technologies to develop solutions for deployment at massive scale.
    • Interact with business users and data scientists and help them understand the concept of self-service data and analytics.
    • Design and plan BI and other visualization tools that capture and analyze data from multiple sources to support data-driven decisions, and debug, monitor, and troubleshoot those solutions.

    Keep up with industry trends and best practices, advising senior management on new and improved data engineering strategies that drive departmental performance, strengthen data governance across the business, promote informed decision-making, and ultimately improve overall business performance.
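
    To give a flavor of the pipeline work described above, here is a minimal sketch of a Spark Structured Streaming job in Java that reads events from Kafka and lands them in an Azure data lake as Parquet. It is only an illustration: the broker address, topic name, and storage paths are hypothetical placeholders, not client specifics.

        import org.apache.spark.sql.Dataset;
        import org.apache.spark.sql.Row;
        import org.apache.spark.sql.SparkSession;
        import org.apache.spark.sql.streaming.StreamingQuery;

        public class KafkaToDataLake {
            public static void main(String[] args) throws Exception {
                SparkSession spark = SparkSession.builder()
                        .appName("kafka-to-datalake")
                        .getOrCreate();

                // Ingest a stream of raw events from a Kafka topic (placeholder broker and topic).
                Dataset<Row> events = spark.readStream()
                        .format("kafka")
                        .option("kafka.bootstrap.servers", "broker-host:9092")
                        .option("subscribe", "member-events")
                        .load()
                        .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp");

                // Land the records in the data lake as Parquet; the checkpoint directory lets the
                // pipeline restart without losing or duplicating data (placeholder storage paths).
                StreamingQuery query = events.writeStream()
                        .format("parquet")
                        .option("path", "abfss://raw@examplelake.dfs.core.windows.net/member-events/")
                        .option("checkpointLocation", "abfss://raw@examplelake.dfs.core.windows.net/_checkpoints/member-events/")
                        .outputMode("append")
                        .start();

                query.awaitTermination();
            }
        }

    A production pipeline would add schema enforcement, partitioning, and monitoring, but the same readStream/writeStream pattern underlies the ingestion work this role owns.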

     

     


    Qualifications

     

     Required Qualifications:

    • 5+ years of consulting experience with top 10 consulting firms
    • 10+ years of experience designing, architecting, and implementing large-scale data processing, data storage, and data distribution systems
    • 5+ years of real-time and streaming experience in Azure-based solutions (HDInsight, Hive, HBase, Kafka, Spark, J2EE)
    • Recent experience on a large-scale Azure project with an emphasis on building a large-scale data platform
    • 3+ years of experience building frameworks covering ingestion, transformation, DevOps, and automation
    • 5+ years of hands-on administration, configuration management, monitoring, and performance tuning of Hadoop/distributed platforms (preferably Azure)
    • 5+ years of demonstrated experience, including at least 2 recent years, designing and delivering solutions using the Cortana Intelligence suite of analytics services in Microsoft Azure, including Azure Machine Learning Studio, HDInsight, PolyBase, Azure Data Lake Analytics, Azure Data Warehouse, Stream Analytics, Data Catalog, and R/RStudio
    • 3+ years of experience migrating large volumes of data from on-premises and cloud infrastructure to Azure using standard Azure automation tools
    • Ability to produce high-quality work products under pressure and within deadlines, with specific references
    • Very strong communication, solutioning, and client-facing skills, especially with non-technical business users
    • A recent project solving a complex BI problem with a BI tool, and the ability to articulate that story
    • 5+ years of working in large multi-vendor environments with multiple teams and people on a project
    • 2+ years of working with Power BI
    • 5+ years of working in a complex Big Data environment using Microsoft tools
    • 5+ years of experience with Team Foundation Server, JIRA, GitHub, and other code management toolsets
    • 8+ years of experience working with the Big Data/HDFS ecosystem (Hadoop, Hive, Pig, Storm, NoSQL, HBase, Cassandra, Druid)
    • Hands-on experience with DevOps solutions such as Puppet, AWS CloudFormation, Docker, and microservices
    • Experience integrating data from multiple data sources
    • Experience in the Business Intelligence & Visualization space (Tableau, Power BI, Qlik, Cognos, SAP BusinessObjects, etc.)
    • Passion for working with open-source technologies as well as commercial platforms

    Perficient full-time employees receive complete and competitive benefits. We offer a collaborative work environment, competitive compensation, generous work/life opportunities and an outstanding benefits package that includes paid time off plus holidays. In addition, all colleagues are eligible for a number of rewards and recognition programs including billable bonus opportunities. Encouraging a healthy work/life balance and providing our colleagues great benefits are just part of what makes Perficient a great place to work. 

     

    More About Perficient

     

    Perficient is the leading digital transformation consulting firm serving Global 2000 and enterprise customers throughout North America. With unparalleled information technology, management consulting and creative capabilities, Perficient and its Perficient Digital agency deliver vision, execution and value with outstanding digital experience, business optimization and industry solutions.

     

    Our work enables clients to improve productivity and competitiveness; grow and strengthen relationships with customers, suppliers and partners; and reduce costs.  Perficient's professionals serve clients from a network of offices across North America and offshore locations in India and China. Traded on the Nasdaq Global Select Market, Perficient is a member of the Russell 2000 index and the S&P SmallCap 600 index.

     

    Perficient is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law.

     

    Disclaimer:  The above statements are not intended to be a complete statement of job content, rather to act as a guide to the essential functions performed by the employee assigned to this classification.  Management retains the discretion to add or change the duties of the position at any time. 

     

     
