Read to the end to learn more about a new study group I will be launching.
In late January 2019, Microsoft launched three new certifications aimed at data scientists and data engineers. For a while now, Microsoft has been experimenting with different approaches to training and credentials. It launched the Microsoft Professional Program in Data Science back in 2017. While that program provides great content, it did not result in either a college diploma or an official Microsoft certification. Now Microsoft is in the process of restructuring its certifications to be more role-focused. Here are details about the three certifications of interest to data scientists and data engineers.
Data scientists do more than build fancy AI and machine learning models. They often need to get involved with the data acquisition process; it is common for data to be pulled from other databases or even an API. Plus, the models need to be deployed. These tasks fall to the data scientist to solve (unless there is a data engineer willing to help). Recently, I have found Azure Functions to be an extremely useful tool for these types of tasks.
What are Azure Functions?
Simply stated, Azure Functions are pieces of code that run in the cloud on demand. More formally stated,
Azure Functions are a serverless compute service that allows code to run on demand without the need to manage servers or hardware infrastructure.
This is exactly what a data scientist needs to solve the tasks mentioned above. I, for one, do not enjoy managing servers (hardware or virtual). I have done it before, but I find it time-consuming and tedious. It is just not my thing. Thus, I happily welcome the serverless capabilities of Azure Functions. I just focus on the code and get the task completed.
Because the code does not always need to be running, Azure Functions invoke the code based upon specified triggers. Once the trigger is activated, the code will begin to run. The following list provides some examples of the triggers available.
Triggers for Azure Functions:
Timer – Set a timer to run the Azure Function as often as you like. Timing is specified with a cron expression.
HTTP REST call – Have some other code fire off an HTTP request to start the Azure Function.
Blob storage – Run the Azure Function whenever a new file is added to a Blob storage account.
Event Hubs – Event Hubs are often used for collecting real-time data, and this integration offers Azure Functions the ability to run when a real-time event occurs.
Others – Cosmos DB, Service Bus, IoT Hub, and GitHub events can also trigger an Azure Function.
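As a concrete illustration of the timer trigger above, here is a minimal sketch of a `function.json` binding that runs a function at 9:30 AM on weekdays. Azure Functions use a six-field CRON expression (seconds, minutes, hours, day of month, month, day of week); the binding name is an arbitrary placeholder.

```json
{
  "bindings": [
    {
      "name": "myTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 30 9 * * 1-5"
    }
  ]
}
```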
What Can Azure Functions Do?
Once you begin to understand the concept, you can quickly see some of the possibilities. Without having to configure servers or virtual machines, the following tasks become much simpler:
Reading and writing data from a database
Interacting with an HTTP endpoint
Automating decisions in real-time
Computing descriptive statistics
Creating your own endpoint for other data scientists to call
Automatically analyzing code after commits
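For example, computing descriptive statistics inside an HTTP-triggered function can be little more than a small pure-Python handler. The sketch below is illustrative (the function names are hypothetical and the Azure-specific request/response wrapping is omitted); it shows the kind of logic such a function would wrap, using only the standard library:

```python
import json
import statistics

def describe(values):
    """Compute simple descriptive statistics for a list of numbers."""
    return {
        "count": len(values),
        "mean": statistics.mean(values),
        "stdev": statistics.pstdev(values),
        "min": min(values),
        "max": max(values),
    }

def handle_request(body: str) -> str:
    """Parse a JSON array from the request body and return a JSON summary.

    In an HTTP-triggered Azure Function, the entry point would pull this
    body string off the incoming request and return the result as the
    HTTP response.
    """
    values = json.loads(body)
    return json.dumps(describe(values))
```

Because the statistics logic is kept separate from the trigger plumbing, it is easy to test locally before deploying.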
Programming Languages for Azure Functions
Azure Functions can be written in several languages, including C#, JavaScript, F#, Java, and Python.
Simplify Tasks for Data Science
Next time you have a data science task which requires a little coding, consider using an Azure Function to run the code. It will most likely save you some deployment and configuration time. Then you can quickly get back to optimizing those fancy AI models.
See the video below for a quick demonstration of how to create an Azure Function via a web browser (no IDE needed).
The differences between data scientists, data engineers, and software engineers can get a little confusing at times. Thus, here is a guest post from Jake Stein, CEO at Stitch (formerly RJ Metrics), which aims to clear up some of that confusion based upon LinkedIn data.
As data grows, so does the expertise needed to manage it. The past few years have seen an increasing distinction between the key roles tasked with managing data: software engineers, data engineers, and data scientists.
More and more we’re seeing data engineers emerge as a subset within the software engineering discipline, but this is still a relatively new trend. Plenty of software engineers are still tasked with moving and managing data.
Our team has released two reports over the past year: one focused on the data science role, the other on data engineering. Both reports are based on self-reported LinkedIn data. In this post, I'll lay out the distinctions between these roles and software engineers, but first, here's a diagram to show you (in very broad strokes) what we saw in the skills breakdown across these three roles:
A software engineer builds applications and systems. Developers will be involved through all stages of this process from design, to writing code, to testing and review. They are creating the products that create the data. Software engineering is the oldest of these three roles, and has established methodologies and tool sets.
Frontend and backend development
Operating system development
A data engineer builds systems that consolidate, store, and retrieve data from the various applications and systems created by software engineers. Data engineering emerged as a niche skill set within software engineering. 40% of all data engineers were previously working as software engineers, making this the most common career path for data engineers by far.
Advanced data structures
Knowledge of new & emerging tools: Hadoop, Spark, Kafka, Hive, etc.
Building ETL/data pipelines
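To make the "building ETL/data pipelines" skill above concrete, here is a minimal, illustrative extract-transform-load sketch in plain Python. The function names, fields, and in-memory "warehouse" are hypothetical stand-ins for real sources and destinations:

```python
def extract(source_rows):
    """Extract: pull raw records from a source (here, an in-memory list)."""
    return list(source_rows)

def transform(rows):
    """Transform: normalize field values and drop incomplete records."""
    cleaned = []
    for row in rows:
        if row.get("email"):
            cleaned.append({"email": row["email"].strip().lower()})
    return cleaned

def load(rows, warehouse):
    """Load: write the cleaned records into a destination store (a dict here)."""
    for row in rows:
        warehouse[row["email"]] = row
    return warehouse

# Wiring the three stages together:
raw = [{"email": "  Ada@Example.COM "}, {"email": None}]
warehouse = load(transform(extract(raw)), {})
```

Real pipelines swap the in-memory pieces for databases, APIs, and message queues, but the extract-transform-load shape stays the same.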
A data scientist builds analysis on top of data. This may come in the form of a one-off analysis for a team trying to better understand customer behavior, or a machine learning algorithm that is then implemented into the code base by software engineers and data engineers.
Business Intelligence dashboards
Evolving Data Teams
These roles are still evolving. The process of ETL is getting much easier overall as new tools (like Stitch) enter the market, making it easy for software developers to set up and maintain data pipelines. Larger companies are pulling data engineers off the software engineering team entirely and forming a centralized data team where infrastructure and analysis sit together. In some scenarios, data scientists are responsible for both data consolidation and analysis.
At this point, there is no single dominant path, but we expect this rapid evolution to continue. After all, data certainly isn't getting any smaller.
As the field of data science continues to grow and mature, it is nice to begin seeing some distinction among data science roles. A new job title gaining popularity is the data engineer. In this post, I lay out some of the distinctions between the two roles.
A data scientist is responsible for pulling insights from data. It is the data scientist's job to pull data, create models, create data products, and tell a story. A data scientist typically interacts with customers and/or executives, and should love scrubbing a dataset for more and more understanding.
The main goal of a data scientist is to produce data products and tell the stories of the data. A data scientist would typically have stronger statistics and presentation skills than a data engineer.
Data engineering is more focused on the systems that store and retrieve data. A data engineer will be responsible for building and deploying storage systems that can adequately handle the workload. Sometimes that means fast, real-time incoming data streams; other times, massive numbers of large video files; still other times, a very high volume of reads.
In other words, a data engineer needs to build systems that can handle the three Vs of big data: volume, velocity, and variety.
The main goal of a data engineer is to make sure the data is properly stored and available to the data scientists and others who need access. A data engineer would typically have stronger software engineering and programming skills than a data scientist.
It is too early to tell if these two roles will ever have a clear separation of responsibilities, but it is nice to see a little relief for the mythical all-in-one data scientist. Both of these roles are important to a properly functioning data science team.
Last week, I got the opportunity to spend some time with the team from Insight Data Engineering. They offer a free program that trains people to be data engineers and then helps them connect with jobs at impressive companies. The program runs a few times a year and consists of six intense weeks of learning about and working on a data engineering project.
Although the program is free, it does have a highly-selective application process. Once accepted, you can expect the following:
A beautiful office space in sunny Palo Alto, CA
Mentoring from experts in the field
Meet and Greets with some of the biggest names in data science
Introductions to some of the leading data engineering companies
Access to a growing network of program alumni
A bright future as a data engineer
Insight Data Engineering is run by the same company behind Insight Data Science, a similar program but for data scientists instead of data engineers, which has operated for the past two years. That program has 100% placement so far, and I don't see that number ever changing. The program has an excellent advisory board that is actively involved in the program.