In the article “Do you need a data scientist?”, the following questions are answered:
- What do data scientists do?
- Who makes a good data scientist?
- When is the right time to hire a data scientist?
I hope to discuss each of these questions in more detail in a later blog post.
To answer the third question briefly: if your data is growing, you would probably benefit from a data scientist.
Below is a video that goes along with this topic.
It is a great time to be working on a startup in the Big Data arena. First, the topics of big data and data science are really popular in the tech world right now. Second, it appears that investors are interested as well. Below are two examples:
Accel Partners formed a $100 million fund for startups that are focused on Big Data. The fund is not limited to storage or analysis; it is really all-encompassing. If you have a startup that in any way helps people or businesses deal with lots of data, you are welcome to apply. For more on the fund and how to apply, see the page on the Accel Partners’ website.
IA Ventures has also set up a $105 million fund for startups that are focused on data.
So if you have a “data” startup, now is a great time to get some funding.
A few days ago, I mentioned that the Stanford Machine Learning class will be starting soon. I thought I should quickly mention some of the topics covered. The list also serves as a great outline for machine learning.
In supervised learning, one has a set of data with features and labels.
- Linear Regression – one/multiple variables
- Gradient Descent – a general algorithm for minimizing a function
- Logistic Regression – useful for predicting classification-type results, where you are looking for a yes-or-no answer. Does the patient have cancer? Will the customer buy my new product? It can also handle more than two classes: what color will a person choose (red, blue, green, silver)?
- Neural Networks – A learning algorithm that is modeled after the brain. Think of neurons.
- Support Vector Machines
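To make the first two supervised-learning topics concrete, here is a minimal NumPy sketch of linear regression fit by gradient descent. This is my own illustration, not code from the course, and the function and variable names are my own:

```python
import numpy as np

def gradient_descent(X, y, alpha=0.05, iterations=2000):
    """Fit linear regression with batch gradient descent.

    X: (m, n) feature matrix, y: (m,) target vector.
    Returns theta (n + 1,), with the intercept term first.
    """
    m = X.shape[0]
    Xb = np.hstack([np.ones((m, 1)), X])     # prepend an intercept column
    theta = np.zeros(Xb.shape[1])
    for _ in range(iterations):
        grad = Xb.T @ (Xb @ theta - y) / m   # gradient of half the mean squared error
        theta -= alpha * grad
    return theta

# Noiseless example: recover y = 1 + 2x
X = np.arange(10, dtype=float).reshape(-1, 1)
y = 1 + 2 * X.ravel()
theta = gradient_descent(X, y)               # theta ≈ [1, 2]
```

The same descent loop generalizes to the other models in the list; only the cost function and its gradient change.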
In unsupervised learning, one has a set of data with features but no labels. Can some structure be found in the data?
- Clustering – The most popular technique is K-means.
- PCA (Principal Component Analysis) – reduces the number of features, which can speed up a learning algorithm
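As a taste of unsupervised learning, here is a bare-bones K-means sketch in NumPy (my own illustration, not course code; for simplicity it seeds the centroids from the first k points rather than at random):

```python
import numpy as np

def kmeans(X, k, iterations=10):
    """Plain K-means: alternate point assignment and centroid update."""
    centroids = X[:k].copy()          # simple deterministic seeding
    for _ in range(iterations):
        # assign every point to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each centroid to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

# Six points alternating between two well-separated blobs
X = np.array([[0.0, 0.2], [10.0, 10.1],
              [0.3, 0.0], [9.8, 10.3],
              [0.1, 0.4], [10.2, 9.9]])
centroids, labels = kmeans(X, 2)  # even rows share one label, odd rows the other
```

Note that no labels go in: the algorithm discovers the two groups on its own, which is the whole point of unsupervised learning.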
This section covers anomaly detection: methods for deciding whether a data point is bad, where “bad” means it deviates significantly from the rest of the data.
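A minimal sketch of one common anomaly-detection approach: model the normal data with a per-feature Gaussian and flag points whose density falls below a threshold. The helper names and the toy data are my own:

```python
import numpy as np

def fit_gaussian(X):
    """Estimate a per-feature mean and variance from normal examples."""
    return X.mean(axis=0), X.var(axis=0)

def density(x, mu, var):
    """Probability density under an independent Gaussian model.
    Points with very low density get flagged as anomalies."""
    return np.prod(np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var))

# "Normal" data clustered around (5, 5) with a small spread
rng = np.random.default_rng(0)
X = 5 + 0.5 * rng.standard_normal((200, 2))
mu, var = fit_gaussian(X)

p_typical = density(np.array([5.0, 5.0]), mu, var)  # high density
p_outlier = density(np.array([9.0, 9.0]), mu, var)  # near zero: an anomaly
```

In practice the flagging threshold is chosen on a labeled validation set of known anomalies.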
Like the name says, recommender systems are used to make recommendations. Companies like Netflix use recommender systems to recommend new movies to customers. LinkedIn also recommends people to connect with. This is a fairly hot topic in the tech world right now.
- Content-Based (Features)
  - Modified Linear Regression
- Non-content-Based (No Features)
  - Collaborative Filtering
  - Matrix Factorization
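To illustrate the non-content-based side of the list, here is a small matrix-factorization sketch for collaborative filtering (my own toy example, not course code): learn low-rank user and item factors by gradient descent on the observed ratings, then use their product to predict the missing ones.

```python
import numpy as np

def factorize(R, mask, k=2, alpha=0.02, iterations=5000, seed=0):
    """Learn user factors U and item factors V so that U @ V.T
    approximates R on the observed (mask == 1) entries."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    for _ in range(iterations):
        E = mask * (U @ V.T - R)      # error on observed ratings only
        U -= alpha * (E @ V)          # gradient step on user factors
        V -= alpha * (E.T @ U)        # gradient step on item factors
    return U, V

# 4 users x 3 movies; 0 marks a missing rating
R = np.array([[5.0, 4.0, 1.0],
              [4.0, 5.0, 1.0],
              [1.0, 1.0, 5.0],
              [1.0, 0.0, 4.0]])
mask = (R > 0).astype(float)
U, V = factorize(R, mask)
pred = U @ V.T   # pred[3, 1] is the model's guess for the missing rating
```

A production system would add regularization and bias terms, but the core idea is exactly this: recommendations from the structure of the ratings alone, with no content features.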
If any of these topics sound interesting to you, sign up for the Stanford Machine Learning class. Professor Andrew Ng will do an excellent job explaining the details.
Another Big Data startup launches.
Big data startup Skytree emerged from stealth mode on Thursday with its product that is designed to democratize the science of machine learning, while improving significantly on the speed and scale of existing options. Skytree has raised a $1.5 million Series A investment round from Javelin Venture Partners.
Machine learning is a particularly complex approach to big data, and one that has been largely relegated to only the most-advanced companies, such as financial institutions or large web properties. The technique enables systems to get smarter the more data they ingest, which is particularly useful for tasks such as finding hidden patterns or accurately classifying data without human interaction. The libraries and algorithms are out there for anyone to use if they have good enough skills, but deploying a system that can perform the task on large data sets with reasonable performance is the hard part.
That’s the problem Skytree thinks…
Hilary talks about data, data people, and the current momentum. She brings up some current challenges.
Challenges with Data
- Robust analysis of streams of data (in volume)
- Storing data so that it can be processed in real time
- Better Education – Good news for this blog
- Imagination – Stop solving the same problems
- What to do with the data
Last week, Heroku announced a new feature for its PostgreSQL database service. The new feature is called Data Clip, and it allows users to share the results of a SQL query. A clip can either preserve the exact data from when the query was originally run, or it can be refreshed to return the current data. I can definitely see this being useful for debugging and troubleshooting, which may have been Heroku’s original intent.
I can also see the Data Clip being very useful for data science and for quickly sharing relevant data. I doubt a Data Clip can handle huge result sets, but huge data is not always necessary; sometimes being able to quickly share data results is just as important. Plus, the Data Clip allows results to be downloaded in Excel, CSV, JSON, or YAML format, so the data can easily be manipulated from there.