I am often confronted by people or organizations who have heard about data science but don’t know where to start. It is a valid concern. Data science is a broad topic that means different things to different people.
Here are the common questions I hear: Should I hire a data scientist? Should I hire some consultants? Should I build a data science team? There is no perfect answer to those questions because it depends upon your organization and situation. I would like to suggest a different approach: at first, don’t worry about titles and organizational structure. Worry about the problems you want to solve. Start out with two questions.
1. What is the goal (be specific)?
This question might seem obvious, but it is often overlooked. Don’t start with data science just because you have heard about others using it. A bad goal for data science is: be data-driven to increase profits. While that might be a high-level strategy, it is much too broad. Better goals are:
identify which customers are likely to leave
identify which products a customer might buy next
determine what cities would be best for expansion
find the most profitable type of marketing for your organization
predict if a person will get cancer in the next year
These are examples of specific goals that data science can help address. Work hard to narrow your goals to something specific. If you accomplish enough of these specific goals, then you might be able to increase profits.
2. What action can be taken?
This is very important. All the predictions and fancy data science do you no good if your organization cannot take any action. Sticking with the previous examples, suppose you can predict whether a person will get cancer in the next year. What do you do with that information? Do you send the person an email? What if you are wrong? Do people really want to know? That is a tricky situation to handle, and any action you take has an ethical component.
Other situations have simpler actions, such as identifying the products a customer might buy next. Common actions might be: sending a coupon, displaying an ad, or suggesting the item be added to the cart.
Another factor to consider with the action is cost. How much will it cost to perform the action? In certain businesses, it might be more profitable to attract new customers than to retain existing ones. In that case, there is little advantage to identifying which customers are likely to leave.
Data science is very exciting, and it has many positives. However, when done with incorrect expectations, it can lead to nowhere but headaches. Thus, before you start building a team or hiring some consultants, make sure you are clear on your goals and actions.
Today brings us a very welcome guest post by Zacharias Voulgaris, author of Julia for Data Science. This is an excellent new book about the Julia language. By reading it you will learn about:
IDEs for using Julia
Basics of the Julia language
Accessing and exploring data
Advanced data science techniques with Julia (cross-validation, clustering, PCA, and more)
The book has a nice flow for someone starting out with Julia and the topics are well explained. Enjoy the post, and hopefully you get a chance to check out the book.
Introducing Julia for Data Science (Technics Publications), a Great Resource for Anyone Interested in Data Science.
Over the past couple of years, there have been several books on the Julia language, a relatively new and versatile tool for computationally heavy applications. Julia has been adopted extensively by the scientific community because it provides a great alternative to MATLAB and R, while its high-level programming style makes it accessible to people who are not adept programmers. Lately it has also attracted the attention of computer science professionals (including Python programmers) as well as data scientists. These people, who were already very effective coders, decided to learn this language as well, since it provides undeniable benefits in performance and rapid prototyping, especially for numeric applications. In addition, the fact that Julia was and still is being developed by a few top MIT graduates shows that this is not a novelty doomed to fade away soon, but a serious effort that is bound to stay around for many years to come.
However, this post is not about Julia per se, since many other people have made its many merits known to the world since the language was first released in 2012. Instead, we aim to talk about the lesser-known aspects of the language, namely its abundant applications in the fascinating field of data science. Although there are already some reliable resources out there pointing out that Julia is undoubtedly ready for data science, this book is the first and most complete resource on this topic. Without assuming any prior knowledge of the language, it guides you step-by-step to mastery of the Julia essentials, helping you get comfortable enough to use it for a variety of data science applications. It may not make you an expert in the language, but data scientists rarely care about the esoteric aspects of the programming tools they use, since that level of know-how is not required for getting stuff done. However, the reader is given enough information to be able to investigate those aspects on their own.
The Julia for Data Science book has been in development for about a year and is heavily focused on the applications part, with lots of code snippets, examples, and even questions and exercises in every chapter. It also makes use of a couple of datasets that closely resemble the real-world ones data scientists encounter in their everyday work. On top of that, it provides some theory on the data science process (there is a whole chapter dedicated to this, while other books usually devote only a couple of pages to it). Although the book is not a complete guide to data science, it provides enough information to give you a sense of perspective and an understanding of how everything fits together. It is by no means a recipe book, though you can use it as a reference once you have finished reading it.
The Julia for Data Science book is available at the publisher’s website, as well as on Amazon, in both paperback and eBook formats. We encourage you to give it a read and experience first-hand how Julia can enrich your data science toolbox!
This is just a short list of a few books that I have recently discovered online.
Model-Based Machine Learning – Chapters of this book become available as they are being written. It introduces machine learning via case studies instead of just focusing on the algorithms.
Foundations of Data Science – This is a much more academic-focused book which could be used at the undergraduate or graduate level. It covers many of the topics one would expect: machine learning, streaming, clustering and more.
Today, I am proud to welcome a guest post by Claire Gilbert, Data Analyst at Gongos. For more on Gongos, see the description at the end of the post.
It’s fair to say that for those who run in business intelligence circles, many admire the work of Fast Forward Labs CEO and Founder Hilary Mason. Perhaps what resonates most with her fans is the moniker she places on data scientists as being ‘awesome nerds’—those who embody the perfect skillsets of math and stats, coding, and communication. She asserts that these individuals have the technical expertise to not only conduct the really, really complex work—but also have the ability to explain the impact of that work to a non-technical audience.
As insights and analytics organizations strive to assemble their own group of ‘awesome nerds,’ there are two ways to consider Hilary’s depiction. Most organizations struggle by taking the first route—searching for those very expensive, highly rare unicorns—individuals that independently sit at this critical intersection of genius. Besides the fact that it would be even more expensive to clone these data scientists, there is simply not enough bandwidth in their day to fulfill on their awesomeness 24/7.
To quote Aristotle, one of the earliest scientists of our time, “the whole is greater than the sum of its parts,” which brings us to the notion of the team. Rather than seeking out those highly sought-after individuals with skills in all three camps, consider creating a collective of individuals with skills from each camp. After all, no one person can solve for the depth and breadth of an organization’s growing data science needs. It takes a specialist such as a mathematician to dive deep; as well as a multidisciplinary mind who can comprehend the breadth, to truly achieve the perfect team.
Team Dynamics of the Data Kind
The ultimate charge for any data science team is to be a problem-solving machine—one that constantly churns in an ever-changing climate. An increasing abundance of data, which in turn gives rise to once-unanswerable business questions, has led clients to expect new levels of complexity in insights. This chain reaction brings with it a unique set of challenges not previously met by a prescribed methodology. As the sets of inputs become more diverse, so too should the skillsets to answer them. While all three characteristics of the ‘awesome nerd’ are indispensable, it’s the collective of ‘nerds’ that will become the driving force in today’s data world.
True to the construct, no two pieces should operate independent of the third. Furthermore, finding and honing balance within a data science team will result in the highest degree of accuracy and relevancy possible.
Let’s look at the makeup of a perfectly balanced team:
The Mathematician/Statistician: This trained academic builds advanced models based on inputs, while understanding the theory and requirements for the results to be leveraged correctly.
The Coder/Programmer: This hands-on ‘architect’ is in charge of cleaning, managing and reshaping data, as well as building simulators or other highly technical tools that result in user-friendly data.
The Communicator/Content Expert: This business ‘translator’ applies an organizational lens to bring previous knowledge to the table in order to connect technical skill sets to client needs.
It’s the interdependence of these skillsets that completes the team and its ability to deliver fully on the promise of data:
A Mathematician/Statistician’s work relies heavily on the Coder/Programmer’s skills. The notion of garbage-in/garbage-out very much applies here. If the Coder hasn’t sourced and managed the data judiciously, the Mathematician cannot build usable models. Both then rely on the knowledge of the Communicator/Content Expert. Even if the data is perfect, and the results statistically correct, the output cannot be acted upon unless it is directly relevant to the business challenge. Furthermore, teams out of balance will be faced with hurdles for which they are not adequately prepared, and output that is not adequately delivered.
To Buy or to Build?
In today’s world of high velocity and high volume of data, companies are faced with a choice. Traditional programmers, like those who have coded surveys and collected data, are currently integrated in the work streams of most insights organizations. However, many of them are not classically trained in math and/or statistics. Likewise, existing quantitative-minded, client-facing talents can be leveraged in the rebuilding of a team. Training existing individuals who have a bent for math and/or stats is possible, yet it is a time-intensive process that calls for patience. If organizations value and believe in their existing talent and choose to go this route, it will then point to the gaps that need to be filled—or bought—to build the ‘perfect’ team.
Organizations have long known the value of data, but no matter how large and detailed it gets, without the human dimension, it will fail to live up to its $30 billion valuation by 2019. The interpretation, distillation and curation of all kinds of data by a team in equilibrium will propel this growth and underscore the importance of data science.
Many people think Hilary’s notion of “awesome nerds” applies only to individuals. But in practice, to realize this kind of market potential, the team must embody the constitution of awesomeness.
As organizations assemble and recruit teams, perhaps their mission statement quite simply should be…
“If you can find the nerds, keep them, but in the absence of an office full of unicorns, create one.”
Gongos, Inc. is a decision intelligence company that partners with Global 1000 corporations to help build the capability and competency in making great consumer-minded decisions. Gongos brings a consultative approach in developing growth strategies propelled by its clients’ insights, analytics, strategy and innovation groups.
Enlisting the multidisciplinary talents of researchers, data scientists and curators, the company fuels a culture of learning both internally and within its clients’ organizations. Gongos also works with clients to develop strategic frameworks to navigate the change required for executional excellence. It serves organizations in the consumer products, financial services, healthcare, lifestyle, retail, and automotive spaces.
Recently, a number of resources for publicly available datasets have been announced.
Kaggle becomes the place for Open Data – I think this is big news! Kaggle just announced Kaggle Datasets which aims to be a repository for publicly available datasets. This is great for organizations that want to release data, but do not necessarily want the overhead of running an open data portal. Hopefully it will gain some traction and become an exceptional resource for open data.
NASA Opens Research – NASA just announced all research papers funded by NASA will be publicly available. It appears the research articles will all be available at PubMed Central, and the data available at NASA’s Data Portal.
Google Robotics Data – Google continues to do interesting things, and this dataset is no exception. It is a dataset about how robots grasp objects (Google Brain Robot Data). I am not overly familiar with this topic, so if you want to know more, see their blog post, Deep Learning for Robots.
The UK government has taken the first step in providing a solid grounding for the future of data science ethics. Recently, they published a “beta” version of the Data Science Ethical Framework.
The framework is based around 6 clear principles:
Start with clear user need and public benefit
Use data and tools which have the minimum intrusion necessary
Create robust data science models
Be alert to public perceptions
Be as open and accountable as possible
Keep data secure
See the above link for further details. The framework is somewhat specific to the UK, but it would be nice to see other countries/organizations adopt a similar framework. Even DJ Patil, U.S. Chief Data Scientist, has stated the importance of ethics in all data science curricula.
Andrew Ng [Co-Founder of Coursera, Stanford Professor, Chief Scientist at Baidu, and All-Around Machine Learning Expert] is writing a book during the summer of 2016. The book is titled Machine Learning Yearning. If you visit the site and sign up quickly, you can get draft copies of the chapters as they become available.
Andrew is an excellent teacher. His MOOCs are wildly successful, and I expect his book to be excellent as well.
This is a guest post from Michael Li of The Data Incubator. The Data Incubator runs a free eight-week data science fellowship to help transition its Fellows from academia to industry. This post runs through some of the toolsets you’ll need to know to kickstart your data science career.
If you’re an aspiring data scientist but still processing your data in Excel, you might want to upgrade your toolset. Why? Firstly, while advanced features like Excel Pivot tables can do a lot, they don’t offer nearly the flexibility, control, and power of tools like SQL, or their functional equivalents in Python (Pandas) or R (Dataframes). Also, Excel has low size limits, making it suitable for “small data”, not “big data.”
In this blog entry we’ll talk about SQL. This should cover your “medium data” needs, which we’ll define as the next level of data, where the rows exceed Excel’s roughly one-million-row limit. SQL stores data in tables, which you can think of as a spreadsheet layout but with more structure. Each row represents a specific record (e.g. an employee at your company) and each column of a table corresponds to an attribute (e.g. name, department id, salary). Critically, every value in a column must be of the same “type”. Imagine a table Employees with columns Name, StartYear, Salary, and DepartmentId.
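To make this concrete, here is a small sketch of how such an Employees table could be created and populated using SQLite from Python. The column names match the ones used in the queries below, but the specific rows (names, years, salaries) are invented for illustration:

```python
import sqlite3

# In-memory database for illustration; the rows are invented sample data.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Employees (
        Name TEXT,            -- every value in a column shares one type
        StartYear INTEGER,
        Salary REAL,
        DepartmentId INTEGER
    )
""")
conn.executemany(
    "INSERT INTO Employees VALUES (?, ?, ?, ?)",
    [("Alice", 2004, 75000.0, 1),
     ("Bob", 2004, 65000.0, 2),
     ("Carol", 2010, 82000.0, 1)],
)
num_rows = conn.execute("SELECT COUNT(*) FROM Employees").fetchone()[0]
print(num_rows)  # 3
```

Each INSERT adds one record (row), and the CREATE TABLE statement is where the per-column types are declared.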
SQL has many keywords that compose its query language, but the ones most relevant to data scientists are SELECT, WHERE, GROUP BY, and JOIN. We’ll go through each of these individually.
SELECT is the foundational keyword in SQL: it retrieves rows from a table. SELECT can also restrict the output to specific columns. For example
SELECT Name, StartYear FROM Employees
The WHERE clause filters the rows. For example
SELECT * FROM Employees WHERE StartYear=2004
Next, the GROUP BY clause aggregates groups of rows using functions like COUNT (count) and AVG (average). For example,
SELECT StartYear, COUNT(*) AS Num, AVG(Salary) AS AvgSalary
FROM Employees
GROUP BY StartYear
Finally, the JOIN clause allows us to join in other tables. For example, assume we have a table called Departments, with columns DepartmentId and DepartmentName.
We could use JOIN to combine the Employees and Departments tables based ON the DepartmentId fields:
SELECT Employees.Name AS EmpName, Departments.DepartmentName AS DepName
FROM Employees JOIN Departments
ON Employees.DepartmentId = Departments.DepartmentId;
The result would contain one row per employee, pairing each EmpName with the matching DepName.
We’ve ignored a lot of details about joins: e.g. there are actually (at least) 4 types of joins, but hopefully this gives you a good picture.
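Here is the same JOIN run with SQLite from Python; the two tables and their rows are invented sample data:

```python
import sqlite3

# Invented sample data for both tables.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employees (Name TEXT, DepartmentId INTEGER)")
conn.execute("CREATE TABLE Departments (DepartmentId INTEGER, DepartmentName TEXT)")
conn.executemany("INSERT INTO Employees VALUES (?, ?)",
                 [("Alice", 1), ("Bob", 2)])
conn.executemany("INSERT INTO Departments VALUES (?, ?)",
                 [(1, "Engineering"), (2, "Sales")])

# The (inner) JOIN keeps only the row pairs whose DepartmentId matches in both tables.
rows = conn.execute("""
    SELECT Employees.Name AS EmpName, Departments.DepartmentName AS DepName
    FROM Employees JOIN Departments
    ON Employees.DepartmentId = Departments.DepartmentId
    ORDER BY EmpName
""").fetchall()
print(rows)  # [('Alice', 'Engineering'), ('Bob', 'Sales')]
```

The ON condition is what ties the two tables together; without it you would get every employee paired with every department.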
Conclusion and Further Reading
With these basic commands, you can get a lot of basic data processing done. Don’t forget that you can nest queries and create really complicated joins. It’s a lot more powerful than Excel, and gives you much better control of your data. Of course, there’s a lot more to SQL than what we’ve mentioned; this is only intended to whet your appetite and give you a taste of what you’re missing.
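As a minimal sketch of query nesting, here is a subquery that computes the average salary, which the outer query then filters against. Again, the table and its rows are invented for the example:

```python
import sqlite3

# Invented sample data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employees (Name TEXT, Salary REAL)")
conn.executemany("INSERT INTO Employees VALUES (?, ?)",
                 [("Alice", 90000.0), ("Bob", 60000.0), ("Carol", 75000.0)])

# The inner SELECT computes the overall average salary (75000 here);
# the outer SELECT keeps only employees earning strictly more than that.
above_avg = conn.execute("""
    SELECT Name FROM Employees
    WHERE Salary > (SELECT AVG(Salary) FROM Employees)
    ORDER BY Name
""").fetchall()
print(above_avg)  # [('Alice',)]
```

This pattern of using one query's result inside another is hard to replicate cleanly in Excel, which is part of what makes SQL worth the upgrade.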
For a tutorial about R’s Dataframes, check out this page.
And when you’re ready to step it up from “medium data” to “big data”, you should apply for a fellowship at The Data Incubator where we work with current-generation data-processing technologies like MapReduce and Spark!