As a not-for-profit organization, Mass General Brigham is committed to supporting patient care, research, teaching, and service to the community by leading innovation across our system. Founded by Brigham and Women's Hospital and Massachusetts General Hospital, Mass General Brigham supports a complete continuum of care including community and specialty hospitals, a managed care organization, a physician network, community health centers, home care and other health-related entities. Several of our hospitals are teaching affiliates of Harvard Medical School, and our system is a national leader in biomedical research.
We're focused on a people-first culture for our system's patients and our professional family. That's why we provide our employees with more ways to achieve their potential. Mass General Brigham is committed to aligning our employees' personal aspirations with projects that match their capabilities and creating a culture that empowers our managers to become trusted mentors. We support each member of our team to own their personal development, and we recognize success at every step.
Our employees use the Mass General Brigham values to govern decisions, actions and behaviors. These values guide how we get our work done: Patients, Affordability, Accountability & Service Commitment, Decisiveness, Innovation & Thoughtful Risk; and how we treat each other: Diversity & Inclusion, Integrity & Respect, Learning, Continuous Improvement & Personal Growth, Teamwork & Collaboration.
GENERAL SUMMARY / OVERVIEW
Through significant investments in Enterprise Data and Digital Health capabilities, Mass General Brigham (MGB) is innovating and transforming the delivery of health care, research, and discovery. The MGB Board and System-wide Executive Leadership Team have made a substantial multi-year investment to build out core digital and data capabilities in support of MGB's mission to drive growth and innovation in inpatient, ambulatory, and digital care. We are just at the start of this journey, and we are looking for talented individuals to join our team, help drive the realization of our strategy, and ultimately be part of transforming healthcare.
To achieve the expected outcomes of this strategy (improved patient care, quality of care, patient engagement, and efficiency), data and analytics teams from across MGB are working together to build out the system-wide Data Ecosystem, leveraging current assets and building new integrated solutions to provide a holistic set of industry-leading data and analytic capabilities for the entire system, supporting Artificial Intelligence (AI), Machine Learning (ML), and real-time operationalization of insights. This effort brings together innovative teams from Data and Analytics, Research Analytics, Clinical Data Science, and Information Systems to collaboratively build the data and analytic ecosystem. To ensure its success, MGB is standing up a new Data and Analytics Operating Model with the goal of standardizing data and analytic operations across the teams involved.
We are looking for a self-motivated Data Engineer to join our data engineering team to design, develop, construct, test, and maintain architectures such as an EDW, a data lake, and large-scale data processing systems. The ideal candidate is an experienced data pipeline builder who enjoys optimizing data systems and building them from the ground up, and will be excited by the prospect of optimizing and/or redesigning our data architecture to support the next generation of products and data initiatives.
PRINCIPAL DUTIES AND RESPONSIBILITIES:
• Design, develop, construct, test, and maintain architectures such as an EDW, a data lake, and large-scale data processing systems
• Support tool selection and proof-of-concept (POC) analysis for the big data ecosystem
• Gather and process raw data at scale to meet functional/non-functional business requirements (including writing scripts, REST API calls, SQL queries, etc.)
• Develop data set processes for data modeling, mining and production
• Integrate new data management technologies (e.g., Collibra, Informatica DQ) and software engineering tools into existing structures
• Participate in building out our existing EDW and our new Data Lake, expanding and optimizing our data ecosystem and data pipeline architecture, and optimizing data flow and collection for cross-functional teams
• The Data Engineer will support our Software Developers, Database Architects, Data Analysts, and Data Scientists on data initiatives and will ensure that optimal data delivery architecture remains consistent throughout ongoing projects.
• They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products.
• Create and maintain optimal data pipeline architecture; assemble large, complex data sets that meet functional/non-functional business requirements on cloud-based data platforms (e.g., Azure) and relational data systems (SQL Server, SSIS)
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, etc.
• Build the data infrastructure required for optimal extraction, transformation, and loading of data from traditional/legacy data sources.
• Work with stakeholders including the Management team, Product owners, and Architecture teams to assist with data-related technical issues and support their data infrastructure needs.
• Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
• While performing the duties of this job, the employee is frequently required to sit, talk, or hear; use hands to finger, handle, or feel; and reach with hands and arms. The employee is occasionally required to stand, walk, stoop, kneel, or crouch. The employee must frequently lift and/or move up to 5 pounds and occasionally lift and/or move up to 20 pounds. Specific vision abilities required by this job include close vision, distance vision, and depth perception.
• The work environment characteristics described here are representative of those an employee encounters while performing the essential functions of this job. Normal office working conditions. The noise level in the work environment is quiet to moderate.
QUALIFICATIONS:
• 2-3 years of experience architecting and building data lakes, Azure big data architecture, and enterprise analytics solutions, and optimizing 'big data' pipelines, architectures, and data sets
• Advanced hands-on knowledge of SQL, U-SQL, Python, C#, Java, and/or PySpark (2+ of these) and experience working with relational databases for data querying and retrieval
• Experience with design and architecture of Azure big data frameworks/tools: Azure Data Lake, Azure Data Factory, Azure Databricks, Azure ML, SQL Data Warehouse, HDInsight, etc.
• Experience with design, ETL engineering, and architecture of MS SQL Server and Cosmos DB
• Experience with design and architecture of SQL Server data security and Azure security, VMs, and VNets
• Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management
• Experience working with cross-functional teams in a dynamic environment
• Experience building big data pipelines with Java and/or Python a plus
• Strong SQL skills on multiple platforms (MPP systems preferred)
• Experience leading development of data lake architectures from scratch
• Experience with data modeling tools (e.g., Erwin, Visio)
• 2-3 years of programming experience in Python and/or Java
• Experience with continuous integration and deployment (CI/CD)
• Strong Unix/Linux skills
• Experience in petabyte-scale data environments and integration of data from multiple diverse sources
• Cloud advanced analytics: Azure ML, machine learning, text analysis, NLP
• Healthcare experience, most notably with clinical data, Epic, payer data, and reference data, is a plus but not mandatory
• Expertise in SQL Server a must; Azure Data Lake and relational data warehouse platforms preferred
• Demonstrated experience in Azure and Hadoop big data technologies (Cloudera, Hortonworks); data lake development is a plus
• Experience with real-time data processing and analytics products is a plus
• Experience with large data warehousing environments in at least two database platforms (Oracle, SQL Server, DB2, etc.)
• Programming experience in Python, Java, and SQL; .NET and C# a plus
• ETL and data processing expertise in Azure (Azure Data Factory, Databricks, etc.), Hadoop (MapReduce, Spark, Sqoop), SSIS, Health Catalyst, and Informatica
• Familiarity with data governance and data quality principles; experience with data quality tools ideal
• Ability to independently troubleshoot and performance-tune large-scale data lake and enterprise systems
• Knowledge of data architecture principles, data lakes, data warehousing, agile development, and DevOps concepts and methodologies
• Understanding of change management techniques and the ability to apply them
• Excellent verbal and written communication, problem-solving, and negotiation skills
• Ability to act as an effective, collaborative team member
Mass General Brigham is an Equal Opportunity Employer. By embracing diverse skills, perspectives, and ideas, we choose to lead. All qualified applicants will receive consideration for employment without regard to race, color, religious creed, national origin, sex, age, gender identity, disability, sexual orientation, military service, genetic information, and/or other status protected under law.
Posted about 1 hour ago