While there are a growing number of universities that offer undergraduate data science degrees, for one reason or another those programs may not be perfect for everyone interested in data science. So, what do you do if you attend a school that does not offer a data science degree? This is a question frequently asked of me, so I thought I would elaborate on my typical response.
You Cannot Know It All
First off, you will never know all there is to know about data science. The field is vast and contains many sub-fields. Thus, as an undergraduate, a good plan is to learn the fundamentals, then expand your knowledge and expertise as your education and career continue. Data science is evolving rapidly and requires continual learning. Hopefully, this is one of the reasons you are interested in the field.
My Recommended Approach
A good plan is to major in computer science or statistics and minor in the other. If your school doesn’t offer either of those majors, then take as many of those classes as you can. Next, choose a domain-specific area such as business, chemistry, or psychology, and gear your elective classes toward that domain. This approach will give you a solid understanding of the statistical and computational underpinnings of data science, and it should leave you well-prepared to find a job or continue your studies in graduate school.
Also, somewhat related, taking an art class or two might not be a bad idea. Visualization is very important to data science. Understanding color palettes and usage of space on a canvas are concepts that will serve you well. Plus, many people strong in computer science and statistical algorithms are lacking in artistic skills.
Some Enhancements to Your Education
If your location allows, consider attending local meetups. Also, get involved with whatever projects you can (Kaggle, internships, open source, …).
Do you have any advice for undergraduates looking to study data science? If so, please leave a comment.
Are you an undergraduate with questions? Please ask in the comments below.
I teach data science courses throughout the US. I enjoy asking attendees why they are in class. I get many good answers, but occasionally I get some funny ones. Here is a story with one of the more humorous answers.
While chatting with an attendee before class, I asked why he chose to attend this class. Here was his answer.
Well, my boss attended a conference and heard a talk on Big Data. Then, he came back to the office and bought Hadoop for some of our systems. Next he heard about this training and told me to attend. When preparing to leave, the boss said, “Get me sum ‘dat big data”.
After a slight chuckle from both of us, I mentioned we would talk more about that in class.
While this story is somewhat humorous, it is not all that uncommon. Companies want to start using data science; they often just do not know where to start. If you are looking for a starting point, check out this post, You Want Data Science, Now What?.
Do you have a funny “data science” or “big data” story? If so, please share in the comments.
I am often approached by people or organizations who have heard about data science but don’t know where to start. It is a valid concern. Data science is a broad topic with different meanings to different people.
Here are the common questions I hear. Should I hire a data scientist? Should I hire some consultants? Should I build a data science team? There is no perfect answer to those questions because it depends upon your organization and situation. I would like to suggest a different approach: at first, don’t worry about titles and organizational structure. Worry about the problems you want to solve. Start with two questions.
1. What is the goal (be specific)?
This question might seem obvious, but it is often overlooked. Don’t start with data science just because you have heard about others using it. A bad goal for data science is: be data-driven to increase profits. While that might be a high-level strategy, it is much too broad. Better goals are:
identify which customers are likely to leave
identify which products a customer might buy next
determine what cities would be best for expansion
find the most profitable type of marketing for your organization
predict if a person will get cancer in the next year
These are examples of specific goals that data science can help address. Work hard to narrow your goals to something specific. If you can achieve enough of these specific goals, then you might actually increase profits.
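To make the first goal, identifying which customers are likely to leave, a bit more concrete, here is a minimal sketch using scikit-learn. The features, values, and the scoring example are entirely hypothetical; a real churn model would be trained on your own customer history.

```python
# A minimal churn-prediction sketch (hypothetical data and features).
from sklearn.linear_model import LogisticRegression

# Each row: [months_as_customer, support_tickets_last_90_days]
X = [[24, 0], [36, 1], [2, 5], [4, 4], [48, 0], [3, 6], [30, 1], [5, 3]]
y = [0, 0, 1, 1, 0, 1, 0, 1]  # 1 = customer left (churned)

model = LogisticRegression()
model.fit(X, y)

# Score a current customer: higher probability means more likely to leave
prob_leave = model.predict_proba([[3, 5]])[0][1]
print(round(prob_leave, 2))
```

The point is not the particular algorithm; it is that a specific goal maps directly to a concrete, measurable prediction task.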
2. What action can be taken?
This is very important. All the predictions and fancy data science do you no good if your organization cannot take any action. Sticking with the previous examples, suppose you can predict whether a person will get cancer in the next year. What do you do with that information? Do you send the person an email? What if you are wrong? Do people really want to know that? That is a tricky situation to handle, and any action you take has an ethics component.
Other situations have simpler actions, such as identifying the products a customer might buy next. Common actions might be: sending a coupon, displaying an ad, or suggesting the item be added to the cart.
Another factor to consider with the action is cost. How much will it cost to perform some action? In certain businesses, it might be more profitable to attract new customers than to retain existing ones. In that case, there is little advantage to identifying which customers are likely to leave.
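The cost question can be made concrete with a little expected-value arithmetic. All of the numbers below are hypothetical; plug in your own costs and probabilities.

```python
# Hypothetical numbers: is it worth sending a $10 retention coupon
# to a customer the model flags as likely to leave?
coupon_cost = 10.0          # cost of taking the action, per customer
customer_value = 120.0      # profit retained if the customer stays
p_leave = 0.25              # predicted probability the customer leaves
p_saved_by_coupon = 0.25    # chance the coupon actually changes their mind

# Expected profit of acting, relative to doing nothing
expected_gain = p_leave * p_saved_by_coupon * customer_value - coupon_cost
print(expected_gain)  # → -2.5, so here the coupon costs more than it saves
```

With these numbers the action loses money on average, which is exactly the situation where a churn prediction, however accurate, provides little business value.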
Data science is very exciting, and it has many positives. However, when pursued with incorrect expectations, it can lead to nothing but headaches. Thus, before you start building a team or hiring consultants, make sure you are clear on your goals and actions.
Today brings us a very welcome guest post by Zacharias Voulgaris, author of Julia for Data Science. This is an excellent new book about the Julia language. By reading it you will learn about:
IDEs for using Julia
Basics of the Julia language
Accessing and exploring data
Advanced data science techniques with Julia (cross-validation, clustering, PCA, and more)
The book has a nice flow for someone starting out with Julia and the topics are well explained. Enjoy the post, and hopefully you get a chance to check out the book.
Introducing Julia for Data Science (Technics Publications), a Great Resource for Anyone Interested in Data Science.
Over the past couple of years, several books have appeared on the Julia language, a relatively new and versatile tool for computationally heavy applications. Julia has been adopted extensively by the scientific community because it provides a great alternative to MATLAB and R, while its high-level programming style makes it accessible to people who are not adept programmers. Lately it has also attracted the attention of computer science professionals (including Python programmers) as well as data scientists. These already very effective coders decided to learn the language because it provides undeniable benefits in performance and rapid prototyping, especially for numeric applications. In addition, the fact that Julia was and is still being developed by a few top MIT graduates shows that this is not a novelty doomed to fade away soon, but a serious effort that is bound to last for many years to come.
However, this post is not about Julia per se, since many other people have made its merits known to the world since the language was first released in 2012. Instead, we aim to talk about a lesser-known aspect of the language, namely its abundant applications in the fascinating field of data science. Although there are already some reliable resources out there showing that Julia is undoubtedly ready for data science, this book is the first and most complete resource on the topic. Without assuming any prior knowledge of the language, it guides you step by step to mastery of the Julia essentials, helping you get comfortable enough to use it for a variety of data science applications. It may not make you an expert in the language, but data scientists rarely care about the esoteric aspects of the programming tools they use, since that level of know-how is not required for getting stuff done. However, the reader is given enough information to investigate those aspects on their own.
The Julia for Data Science book has been in development for about a year and is heavily focused on applications, with lots of code snippets, examples, and even questions and exercises in every chapter. It also makes use of a couple of datasets that closely resemble the real-world ones data scientists encounter in their everyday work. On top of that, it provides some theory on the data science process (a whole chapter is dedicated to this, whereas other books usually devote only a couple of pages to it). Although the book is not a complete guide to data science, it provides enough information to give you a sense of perspective and an understanding of how everything fits together. It is by no means a recipe book, though you can use it as a reference once you have finished reading it.
The Julia for Data Science book is available at the publisher’s website, as well as on Amazon, in both paperback and eBook formats. We encourage you to give it a read and experience first-hand how Julia can enrich your data science toolbox!
This is just a short list of a few books that I have recently discovered online.
Model-Based Machine Learning – Chapters of this book become available as they are being written. It introduces machine learning via case studies instead of just focusing on the algorithms.
Foundations of Data Science – This is a much more academic-focused book which could be used at the undergraduate or graduate level. It covers many of the topics one would expect: machine learning, streaming, clustering and more.
Today, I am proud to welcome a guest post by Claire Gilbert, Data Analyst at Gongos. For more on Gongos, see the description at the end of the post.
It’s fair to say that for those who run in business intelligence circles, many admire the work of Fast Forward Labs CEO and Founder Hilary Mason. Perhaps what resonates most with her fans is the moniker she places on data scientists as being ‘awesome nerds’—those who embody the perfect skillsets of math and stats, coding, and communication. She asserts that these individuals have the technical expertise to not only conduct the really, really complex work—but also have the ability to explain the impact of that work to a non-technical audience.
As insights and analytics organizations strive to assemble their own group of ‘awesome nerds,’ there are two ways to consider Hilary’s depiction. Most organizations struggle by taking the first route—searching for those very expensive, highly rare unicorns—individuals that independently sit at this critical intersection of genius. Besides the fact that it would be even more expensive to clone these data scientists, there is simply not enough bandwidth in their day to fulfill on their awesomeness 24/7.
To quote Aristotle, “the whole is greater than the sum of its parts,” which brings us to the notion of the team. Rather than seeking out those highly sought-after individuals with skills in all three camps, consider creating a collective of individuals with skills from each camp. After all, no one person can cover the depth and breadth of an organization’s growing data science needs. It takes both a specialist, such as a mathematician, who can dive deep and a multidisciplinary mind who can comprehend the breadth to truly achieve the perfect team.
Team Dynamics of the Data Kind
The ultimate charge for any data science team is to be a problem-solving machine, one that constantly churns in an ever-changing climate. An increasing abundance of data gives rise to once-unanswerable business questions, which in turn leads clients to expect new levels of complexity in insights. This chain reaction brings with it a unique set of challenges not previously met by any prescribed methodology. As the sets of inputs become more diverse, so too should the skillsets to answer them. While all three characteristics of the ‘awesome nerd’ are indispensable, it’s the collective of ‘nerds’ that will become the driving force in today’s data world.
True to the construct, no two pieces should operate independently of the third. Furthermore, finding and honing balance within a data science team will result in the highest degree of accuracy and relevancy possible.
Let’s look at the makeup of a perfectly balanced team:
The Mathematician/Statistician – This trained academic builds advanced models based on inputs, while understanding the theory and requirements for the results to be leveraged correctly.
The Coder/Programmer – This hands-on ‘architect’ is in charge of cleaning, managing, and reshaping data, as well as building simulators or other highly technical tools that result in user-friendly data.
The Communicator/Content Expert – This business ‘translator’ applies an organizational lens, bringing previous knowledge to the table in order to connect technical skill sets to client needs.
It’s the interdependence of these skillsets that completes the team and its ability to deliver fully on the promise of data:
A Mathematician/Statistician’s work relies heavily on the Coder/Programmer’s skills. The notion of garbage-in/garbage-out very much applies here. If the Coder hasn’t sourced and managed the data judiciously, the Mathematician cannot build usable models. Both then rely on the knowledge of the Communicator/Content Expert. Even if the data is perfect, and the results statistically correct, the output cannot be acted upon unless it is directly relevant to the business challenge. Furthermore, teams out of balance will face hurdles for which they are not adequately prepared, and produce output that is not adequately delivered.
To Buy or to Build?
In today’s world of high-velocity, high-volume data, companies are faced with a choice. Traditional programmers, like those who have coded surveys and collected data, are already integrated into the work streams of most insights organizations. However, many of them are not classically trained in math or statistics. Likewise, existing quantitative-minded, client-facing talent can be leveraged in the rebuilding of a team. Training existing individuals who have a bent for math or stats is possible, yet it is a time-intensive process that calls for patience. If organizations value and believe in their existing talent and choose to go this route, doing so will reveal the gaps that need to be filled, or bought, to build the ‘perfect’ team.
Organizations have long known the value of data, but no matter how large and detailed it gets, without the human dimension, it will fail to live up to its $30 billion valuation by 2019. The interpretation, distillation and curation of all kinds of data by a team in equilibrium will propel this growth and underscore the importance of data science.
Many people think Hilary’s notion of “awesome nerds” applies only to individuals. But in practice, to realize this kind of market potential, the team must embody the constitution of awesomeness.
As organizations assemble and recruit teams, perhaps their mission statement quite simply should be…
“If you can find the nerds, keep them, but in the absence of an office full of unicorns, create one.”
Gongos, Inc. is a decision intelligence company that partners with Global 1000 corporations to help build the capability and competency in making great consumer-minded decisions. Gongos brings a consultative approach in developing growth strategies propelled by its clients’ insights, analytics, strategy and innovation groups.
Enlisting the multidisciplinary talents of researchers, data scientists and curators, the company fuels a culture of learning both internally and within its clients’ organizations. Gongos also works with clients to develop strategic frameworks to navigate the change required for executional excellence. It serves organizations in the consumer products, financial services, healthcare, lifestyle, retail, and automotive spaces.
Recently, a number of resources for publicly available datasets have been announced.
Kaggle becomes the place for Open Data – I think this is big news! Kaggle just announced Kaggle Datasets which aims to be a repository for publicly available datasets. This is great for organizations that want to release data, but do not necessarily want the overhead of running an open data portal. Hopefully it will gain some traction and become an exceptional resource for open data.
NASA Opens Research – NASA just announced all research papers funded by NASA will be publicly available. It appears the research articles will all be available at PubMed Central, and the data available at NASA’s Data Portal.
Google Robotics Data – Google continues to do interesting things, and this is definitely one of them: a dataset about how robots grasp objects (Google Brain Robot Data). I am not overly familiar with this topic, so if you want to know more, see their blog post, Deep Learning for Robots.
The UK government has taken the first step in providing a solid grounding for the future of data science ethics. Recently, they published a “beta” version of the Data Science Ethical Framework.
The framework is based on six clear principles:
Start with clear user need and public benefit
Use data and tools which have the minimum intrusion necessary
Create robust data science models
Be alert to public perceptions
Be as open and accountable as possible
Keep data secure
See the above link for further details. The framework is somewhat specific to the UK, but it would be nice to see other countries and organizations adopt something similar. Even DJ Patil, U.S. Chief Data Scientist, has stated the importance of ethics in all data science curricula.