Data Science Society is organizing the first ONLINE #Datathon2018 – a 48-hour challenge for everyone passionate about data, eager to experiment with new types of data, and looking to expand their network of connections in the field globally.
The Datathon is one of the initiatives of Data Science Society, happening for the third time, this time fully digital!
The participants will have the chance to work on real-world cases of top companies such as Telenor, Receipt Bank, Ontotext, Kaufland, VMware, ZenCodeo, and A Data Pro, while working and communicating on an internal platform, supported by the services of the best cloud providers – IBM, Microsoft, and Amazon.
NLP, Computer Vision and AI
#Datathon2018 is expected to draw many data enthusiasts from a variety of backgrounds and interests. Academics and practitioners will have the chance to put their knowledge into action in three categories of cases – NLP, Artificial Intelligence, and Computer Vision. Step outside the theory and see data from a different perspective while collaborating in a team of like-minded people and learning to deal with the unexpected issues that real-world data brings.
All data scientists, mathematicians, data analytics experts, software engineers and data enthusiasts will have the chance to dive deep into the data and be mentored by internationally renowned experts.
The #Datathon2018 takes place between the 9th and 11th of February, and registration is open.
Today brings us a very welcome guest post by Zacharias Voulgaris, author of Julia for Data Science. This is an excellent new book about the Julia language. By reading it you will learn about:
IDEs for using Julia
Basics of the Julia language
Accessing and exploring data
Advanced data science techniques with Julia (cross-validation, clustering, PCA, and more)
The book has a nice flow for someone starting out with Julia and the topics are well explained. Enjoy the post, and hopefully you get a chance to check out the book.
Introducing Julia for Data Science (Technics Publications), a Great Resource for Anyone Interested in Data Science.
Over the past couple of years, there have been several books on the Julia language, a relatively new and versatile tool for computationally heavy applications. Julia has been adopted extensively by the scientific community as a great alternative to MATLAB and R, while its high-level programming style makes it accessible to people who are not adept programmers. Lately it has also attracted the attention of computer science professionals (including Python programmers) as well as data scientists. These people, already very effective coders, decided to learn the language because it offers undeniable benefits in performance and rapid prototype development, especially for numeric applications. In addition, the fact that Julia was and is still being developed by a team of top MIT graduates shows that this is not a novelty doomed to fade away soon, but a serious effort that is bound to last for many years to come.
However, this post is not about Julia per se, since many others have made its merits known to the world since the language was first released in 2012. Instead, we aim to talk about a lesser-known aspect of the language, namely its abundant applications in the fascinating field of data science. Although there are already some reliable resources out there pinpointing the fact that Julia is undoubtedly ready for data science, this book is the first and most complete resource on the topic. Without assuming any prior knowledge of the language, it guides you step-by-step to mastery of the Julia essentials, helping you get comfortable enough to use it for a variety of data science applications. It may not make you an expert in the language, but data scientists rarely care about the esoteric aspects of the programming tools they use, since that level of know-how is not required for getting stuff done. However, readers are given enough information to investigate those aspects on their own.
The Julia for Data Science book has been in development for about a year and is heavily focused on applications, with lots of code snippets, examples, and even questions and exercises in every chapter. It also makes use of a couple of datasets that closely resemble the real-world ones data scientists encounter in their everyday work. On top of that, it provides some theory on the data science process (a whole chapter is dedicated to this, whereas other books usually devote only a couple of pages to it). Although the book is not a complete guide to data science, it provides enough information to give you a sense of perspective and an understanding of how everything fits together. It is by no means a recipe book, though you can use it as a reference once you have finished reading it.
The Julia for Data Science book is available at the publisher’s website, as well as on Amazon, in both paperback and eBook formats. We encourage you to give it a read and experience first-hand how Julia can enrich your data science toolbox!
The differences between Data Scientists, Data Engineers, and Software Engineers can get a little confusing at times. Thus, here is a guest post provided by Jake Stein, CEO at Stitch (formerly RJMetrics), which aims to clear up some of that confusion based upon LinkedIn data.
As data grows, so does the expertise needed to manage it. The past few years have seen an increasing distinction between the key roles tasked with managing data: software engineers, data engineers, and data scientists.
More and more we’re seeing data engineers emerge as a subset within the software engineering discipline, but this is still a relatively new trend. Plenty of software engineers are still tasked with moving and managing data.
Our team has released two reports over the past year, one focused on understanding the data science role, one on data engineering. Both of these reports are based on self-reported LinkedIn data. In this post, I’ll lay out the distinctions between these roles and software engineers, but first, here’s a diagram to show you (in very broad strokes) what we saw in the skills breakdown between these three roles:
A software engineer builds applications and systems. Developers will be involved through all stages of this process from design, to writing code, to testing and review. They are creating the products that create the data. Software engineering is the oldest of these three roles, and has established methodologies and tool sets.
Frontend and backend development
Operating system development
A data engineer builds systems that consolidate, store, and retrieve data from the various applications and systems created by software engineers. Data engineering emerged as a niche skill set within software engineering. 40% of all data engineers were previously working as software engineers, making this the most common career path for data engineers by far.
Advanced data structures
Knowledge of new & emerging tools: Hadoop, Spark, Kafka, Hive, etc.
Building ETL/data pipelines
A data scientist builds analysis on top of data. This may come in the form of a one-off analysis for a team trying to better understand customer behavior, or a machine learning algorithm that is then implemented into the code base by software engineers and data engineers.
Business Intelligence dashboards
Evolving Data Teams
These roles are still evolving. The process of ETL is getting much easier overall as new tools (like Stitch) enter the market, making it easy for software developers to set up and maintain data pipelines. Larger companies are pulling data engineers off the software engineering team entirely in favor of forming a centralized data team where infrastructure and analysis sit together. In some scenarios data scientists are responsible for both data consolidation and analysis.
At this point, there is no single dominant path. But we expect this rapid evolution to continue; after all, data certainly isn’t getting any smaller.
Today, I am proud to welcome a guest post by Claire Gilbert, Data Analyst at Gongos. For more on Gongos, see the description at the end of the post.
It’s fair to say that for those who run in business intelligence circles, many admire the work of Fast Forward Labs CEO and Founder Hilary Mason. Perhaps what resonates most with her fans is the moniker she places on data scientists as being ‘awesome nerds’—those who embody the perfect skillsets of math and stats, coding, and communication. She asserts that these individuals have the technical expertise to not only conduct the really, really complex work—but also have the ability to explain the impact of that work to a non-technical audience.
As insights and analytics organizations strive to assemble their own group of ‘awesome nerds,’ there are two ways to consider Hilary’s depiction. Most organizations struggle by taking the first route—searching for those very expensive, highly rare unicorns—individuals that independently sit at this critical intersection of genius. Besides the fact that it would be even more expensive to clone these data scientists, there is simply not enough bandwidth in their day to fulfill on their awesomeness 24/7.
To quote Aristotle, one of the earliest scientists of our time, “the whole is greater than the sum of its parts,” which brings us to the notion of the team. Rather than seeking out those highly sought-after individuals with skills in all three camps, consider creating a collective of individuals with skills from each camp. After all, no one person can solve for the depth and breadth of an organization’s growing data science needs. It takes a specialist such as a mathematician to dive deep; as well as a multidisciplinary mind who can comprehend the breadth, to truly achieve the perfect team.
Team Dynamics of the Data Kind
The ultimate charge for any data science team is to be a problem-solving machine—one that constantly churns in an ever-changing climate. An increasing abundance of data, which in turn gives rise to once-unanswerable business questions, has led clients to expect new levels of complexity in insights. This chain reaction brings with it a unique set of challenges not previously met by a prescribed methodology. As the sets of inputs become more diverse, so too should the skillsets to answer them. While all three characteristics of the ‘awesome nerd’ are indispensable, it’s the collective of ‘nerds’ that will become the driving force in today’s data world.
True to the construct, no two pieces should operate independently of the third. Furthermore, finding and honing balance within a data science team will result in the highest degree of accuracy and relevancy possible.
Let’s look at the makeup of a perfectly balanced team:
The Mathematician/Statistician: this trained academic builds advanced models based on inputs, while understanding the theory and requirements for the results to be leveraged correctly.
The Coder/Programmer: this hands-on ‘architect’ is in charge of cleaning, managing and reshaping data, as well as building simulators or other highly technical tools that result in user-friendly data.
The Communicator/Content Expert: this business ‘translator’ applies an organizational lens to bring previous knowledge to the table in order to connect technical skill sets to client needs.
It’s the interdependence of these skillsets that completes the team and its ability to deliver fully on the promise of data:
A Mathematician/Statistician’s work relies heavily on the Coder/Programmer’s skills. The notion of garbage-in/garbage-out very much applies here. If the Coder hasn’t sourced and managed the data judiciously, the Mathematician cannot build usable models. Both then rely on the knowledge of the Communicator/Content Expert. Even if the data is perfect, and the results statistically correct, the output cannot be activated against unless it is directly relevant to the business challenge. Furthermore, teams out of balance will be faced with hurdles for which they are not adequately prepared, and output that is not adequately delivered.
To Buy or to Build?
In today’s world of high-velocity, high-volume data, companies are faced with a choice. Traditional programmers, like those who have coded surveys and collected data, are currently integrated in the work streams of most insights organizations. However, many of them are not classically trained in math and/or statistics. Likewise, existing quantitative-minded, client-facing talents can be leveraged in the rebuilding of a team. Training either of these existing groups of individuals who have a bent for math and/or stats is possible, yet it is a time-intensive process that calls for patience. If organizations value and believe in their existing talent and choose to go this route, it will then point to the gaps that need to be filled—or bought—to build the ‘perfect’ team.
Organizations have long known the value of data, but no matter how large and detailed it gets, without the human dimension, it will fail to live up to its $30 billion valuation by 2019. The interpretation, distillation and curation of all kinds of data by a team in equilibrium will propel this growth and underscore the importance of data science.
Many people think Hilary’s notion of “awesome nerds” applies only to individuals. But in practice, to realize this kind of market potential, the team must embody the constitution of awesomeness.
As organizations assemble and recruit teams, perhaps their mission statement quite simply should be…
“If you can find the nerds, keep them, but in the absence of an office full of unicorns, create one.”
Gongos, Inc. is a decision intelligence company that partners with Global 1000 corporations to help build the capability and competency in making great consumer-minded decisions. Gongos brings a consultative approach in developing growth strategies propelled by its clients’ insights, analytics, strategy and innovation groups.
Enlisting the multidisciplinary talents of researchers, data scientists and curators, the company fuels a culture of learning both internally and within its clients’ organizations. Gongos also works with clients to develop strategic frameworks to navigate the change required for executional excellence. It serves organizations in the consumer products, financial services, healthcare, lifestyle, retail, and automotive spaces.
This is a guest post from Michael Li of The Data Incubator. The Data Incubator runs a free eight-week data science fellowship to help transition its Fellows from Academia to Industry. This post runs through some of the toolsets you’ll need to know to kickstart your Data Science Career.
If you’re an aspiring data scientist but still processing your data in Excel, you might want to upgrade your toolset. Why? Firstly, while advanced features like Excel Pivot tables can do a lot, they don’t offer nearly the flexibility, control, and power of tools like SQL, or their functional equivalents in Python (Pandas) or R (Dataframes). Also, Excel has low size limits, making it suitable for “small data”, not “big data.”
In this blog entry we’ll talk about SQL. This should cover your “medium data” needs, which we’ll define as the next level of data, where the rows no longer fit within Excel’s roughly one-million-row limit. SQL stores data in tables, which you can think of as a spreadsheet layout but with more structure. Each row represents a specific record (e.g. an employee at your company) and each column of a table corresponds to an attribute (e.g. name, department id, salary). Critically, all values in a column must be of the same “type”. The examples below use a table named Employees with columns Name, DepartmentId, Salary, and StartYear.
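The original sample table did not survive formatting, but a stand-in is easy to sketch. Below is a minimal, hypothetical Employees table (the names, salaries, and years are invented purely for illustration; only the column names come from the queries in this post), built with Python’s sqlite3 module:

```python
import sqlite3

# In-memory SQLite database standing in for a real SQL server.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical Employees table; rows are invented for illustration only.
cur.execute("""
    CREATE TABLE Employees (
        Name         TEXT,
        DepartmentId INTEGER,
        Salary       REAL,
        StartYear    INTEGER
    )
""")
cur.executemany(
    "INSERT INTO Employees VALUES (?, ?, ?, ?)",
    [("Alice", 1, 85000.0, 2004),
     ("Bob",   2, 72000.0, 2006),
     ("Carol", 1, 91000.0, 2004)],
)

# Every value in a column conforms to the declared type.
print(cur.execute("SELECT COUNT(*) FROM Employees").fetchone()[0])  # prints 3
```

Any SQL database (Postgres, MySQL, etc.) would accept essentially the same CREATE TABLE and INSERT statements; sqlite3 is used here only because it ships with Python.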
SQL has many keywords which compose its query language but the ones most relevant to data scientists are SELECT, WHERE, GROUP BY, JOIN. We’ll go through these each individually.
SELECT is the foundational keyword in SQL: SELECT * FROM Employees returns every row and every column of the table. SELECT can also restrict the output to specific columns. For example
SELECT Name, StartYear FROM Employees
The WHERE clause filters the rows. For example
SELECT * FROM Employees WHERE StartYear=2004
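A minimal sketch of both queries, run through Python’s sqlite3 against a hypothetical Employees table (rows invented for illustration):

```python
import sqlite3

# Set up a small, hypothetical Employees table.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Employees "
            "(Name TEXT, DepartmentId INTEGER, Salary REAL, StartYear INTEGER)")
cur.executemany("INSERT INTO Employees VALUES (?, ?, ?, ?)",
                [("Alice", 1, 85000.0, 2004), ("Bob", 2, 72000.0, 2006)])

# SELECT with an explicit column list keeps only those columns.
names_years = cur.execute("SELECT Name, StartYear FROM Employees").fetchall()
print(names_years)  # [('Alice', 2004), ('Bob', 2006)]

# WHERE filters the rows before they are returned.
hires_2004 = cur.execute(
    "SELECT * FROM Employees WHERE StartYear = 2004").fetchall()
print(hires_2004)  # [('Alice', 1, 85000.0, 2004)]
```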
Next, the GROUP BY clause allows for combining rows using different functions like COUNT (count) and AVG (average). For example,
SELECT StartYear, COUNT(*) as Num, AVG(Salary) as AvgSalary
FROM Employees
GROUP BY StartYear
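As a runnable sketch (hypothetical data again; an ORDER BY is added only so the output order is deterministic):

```python
import sqlite3

# Set up a small, hypothetical Employees table.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Employees "
            "(Name TEXT, DepartmentId INTEGER, Salary REAL, StartYear INTEGER)")
cur.executemany("INSERT INTO Employees VALUES (?, ?, ?, ?)",
                [("Alice", 1, 85000.0, 2004),
                 ("Bob",   2, 72000.0, 2006),
                 ("Carol", 1, 91000.0, 2004)])

# One output row per StartYear; COUNT and AVG aggregate within each group.
per_year = cur.execute("""
    SELECT StartYear, COUNT(*) AS Num, AVG(Salary) AS AvgSalary
    FROM Employees
    GROUP BY StartYear
    ORDER BY StartYear  -- only to make the output order deterministic
""").fetchall()
print(per_year)  # [(2004, 2, 88000.0), (2006, 1, 72000.0)]
```

Two 2004 hires average to 88,000, while the single 2006 hire is a group of one — GROUP BY collapses each group to one summary row.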
Finally, the JOIN clause allows us to join in other tables. For example, assume we have a second table, Departments, with columns DepartmentId and DepartmentName.
We could use JOIN to combine the Employees and Departments tables based ON the DepartmentId fields:
SELECT Employees.Name AS EmpName, Departments.DepartmentName AS DepName
FROM Employees JOIN Departments
ON Employees.DepartmentId = Departments.DepartmentId;
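Here is the same join as a sketch, with both tables populated with hypothetical rows:

```python
import sqlite3

# Set up two small, hypothetical tables that share a DepartmentId key.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Employees "
            "(Name TEXT, DepartmentId INTEGER, Salary REAL, StartYear INTEGER)")
cur.execute("CREATE TABLE Departments (DepartmentId INTEGER, DepartmentName TEXT)")
cur.executemany("INSERT INTO Employees VALUES (?, ?, ?, ?)",
                [("Alice", 1, 85000.0, 2004), ("Bob", 2, 72000.0, 2006)])
cur.executemany("INSERT INTO Departments VALUES (?, ?)",
                [(1, "Engineering"), (2, "Sales")])

# The ON clause matches each employee to the department with the same id.
joined = cur.execute("""
    SELECT Employees.Name AS EmpName, Departments.DepartmentName AS DepName
    FROM Employees JOIN Departments
    ON Employees.DepartmentId = Departments.DepartmentId
    ORDER BY EmpName  -- only for deterministic output
""").fetchall()
print(joined)  # [('Alice', 'Engineering'), ('Bob', 'Sales')]
```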
Each row of the result pairs an employee’s name (EmpName) with that employee’s department name (DepName).
We’ve ignored a lot of details about joins: e.g. there are actually (at least) 4 types of joins, but hopefully this gives you a good picture.
Conclusion and Further Reading
With these basic commands, you can get a lot of data processing done. Don’t forget that you can nest queries and create really complicated joins. It’s a lot more powerful than Excel, and gives you much better control of your data. Of course, there’s a lot more to SQL than what we’ve mentioned; this is only intended to whet your appetite and give you a taste of what you’re missing.
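One sketch of a nested query (hypothetical data once more): the inner SELECT computes the average salary, and the outer query keeps only employees who earn more than it.

```python
import sqlite3

# Set up a small, hypothetical Employees table.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Employees "
            "(Name TEXT, DepartmentId INTEGER, Salary REAL, StartYear INTEGER)")
cur.executemany("INSERT INTO Employees VALUES (?, ?, ?, ?)",
                [("Alice", 1, 85000.0, 2004),
                 ("Bob",   2, 72000.0, 2006),
                 ("Carol", 1, 91000.0, 2004)])

# The inner SELECT runs first; its single value feeds the outer WHERE.
above_avg = cur.execute("""
    SELECT Name, Salary FROM Employees
    WHERE Salary > (SELECT AVG(Salary) FROM Employees)
    ORDER BY Name  -- only for deterministic output
""").fetchall()
print(above_avg)  # [('Alice', 85000.0), ('Carol', 91000.0)]
```

The average here is about 82,667, so Alice and Carol clear it while Bob does not — a filter you could not express in a single flat query without the subquery.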
For a tutorial on R’s Dataframes, check out this page.
And when you’re ready to step it up from “medium data” to “big data”, you should apply for a fellowship at The Data Incubator where we work with current-generation data-processing technologies like MapReduce and Spark!
Today, we are lucky to have Daniel Levine of RJMetrics provide a guest post. RJMetrics created an extensive report detailing The State of Data Science. I asked Daniel to provide some results as they relate to the current education of data scientists.
Recently, RJMetrics released a benchmark report that looked to answer many of the questions people have about today’s data scientists, such as how many data scientists there are, what degrees they have, and what skills they possess.
From LinkedIn data on the 11,400 data scientists working now, we can get a much better sense of what types of data scientists companies are hiring, and how senior data scientists differ from their junior counterparts.
While it was typical to see data scientists report multiple degrees, when we looked at the percentages of all distinct bachelor’s, master’s, and doctorate degrees, we found that 42% finished their education with a master’s.
The high number of data scientists who receive graduate degrees (79%) is indicative of the increasing demand for specialists and a desire from data scientists for advanced training.
Additionally, these numbers may indicate that data science is simply attracting highly educated individuals because of its sexy and lucrative career path.
So what does this distribution look like as you climb the corporate ladder? You may assume that the higher the position, the more PhDs; but in fact, across Junior, Senior, and Chief Data Scientists, we saw the highest ratio of PhDs to Master’s at the Senior level.
We speculate that the drop from 43% at the Senior level to 35% at the Chief level actually reflects how long those individuals have been in the field. In a study by Heidrick & Struggles titled “Understanding Today’s Chief Data Scientist,” they found that Chief Data Scientists “average nearly 15 years of post-degree commercial (PDC) experience.” What we’re likely seeing in this data is the “first crop” of Chief Data Scientists who earned this title in the field, not in the classroom.
When we looked at what data scientists studied during their education, we found that besides Business Administration/Management, they were mostly STEM-focused.
We believe that Computer Science is so popular because a data scientist without CS skills is at an extreme disadvantage because they won’t be able to extract the data well enough to properly analyze it. DJ Patil and Hilary Mason, in their book Creating a Data Culture, went as far as to say, “a data scientist who lacks the tools to get data from a database into an analysis package and back out again will become a second-class citizen in the technical organization.”
In analyzing 254,600 records of skills, we found the most popular skills to be more generic than we’d expected. Popular buzz terms like “big data” and “hadoop” didn’t crack the top 10, while programming languages like “r” and “python” are extremely popular among data scientists.
When the data was sliced by seniority, we saw a major difference between Junior, Senior, and Chief levels. To make these differences easier to digest, we compared each level to the same common denominator: the average data scientist.
Again, the Chief Data Scientist data is of particular interest. These C-suite professionals are more likely to list skills like “business intelligence,” “analytics,” “leadership,” “strategy,” and “management” among their skills than both junior and senior data scientists, but less likely to list skills on the more technical side, like “python” and “r”.
While it’s true that chief data scientists may be simply emphasizing skills that are more relevant to their position within the company, we also speculate that many chief data scientists assumed these roles by virtue of being in the field longer or having additional qualifications, such as a business degree. Therefore, it is also possible that some chief data scientists never actually learned many of the skills listed by more junior people.
If you’d like more analysis about this data and a more detailed explanation about our methods, you can check out the full State of Data Science.
Have you ever wondered what the deal was behind all the hype of “big data”? Well, so did we. In 2014, data science hit peak popularity, and as graduates with degrees in statistics, business, and computer science from UC Berkeley, we found ourselves with a unique skill set that was in high demand. We recognized that as recent graduates, our foundational knowledge was purely theoretical; we lacked industry experience; and we realized that we were not alone in this predicament. And so, we sought out those who could supplement our knowledge, interviewing leaders, experts, and professionals – the giants in our industry.

What began as a quest for the reality behind the buzzwords of “big data” and “data science,” The Data Analytics Handbook, quickly turned into the first educational product of our startup Leada (see www.teamleada.com). Thirty-plus interviews and four editions later, the handbook has been downloaded over 30,000 times by readers from all over the world. In them, you’ll discover whether “big data” is overblown, what skills your portfolio companies should look for when hiring a data scientist, how leading “big data” and analytics companies interview, and which industries will be most impacted by the disruptive power of data science. We hope you enjoy reading these interviews as much as we enjoyed creating them!