Apache Mahout v0.13.0 is out, and there are a lot of exciting new features and integrations, including GPU acceleration, Spark 2.x/Scala 2.10 integration (experimental; full support in 0.13.1), and a new framework for “precanned algorithms”. In this post we’re going to talk about the new algorithm framework and how you can contribute to your favorite […] Read more "Introducing Pre-canned Algorithms in Apache Mahout"
Apache Mahout has just released the long-awaited 0.13.0, which introduces modular native solvers (e.g. GPU support!). TensorFlow has done a great job driving the conversation around bringing GPU-accelerated linear algebra to the masses for implementing custom algorithms; however, it has a major drawback: it prefers to manage its own cluster or […] Read more "Lucky Number 0.13.0"
This week we’re going to show off how easy it is to “roll our own” algorithms in Apache Mahout by looking at Eigenfaces. This algorithm is really easy and fun in Mahout because Mahout comes with a first-class distributed stochastic singular value decomposition, Mahout SSVD. This is going to be a big job, […] Read more "Deep Magic, Volume 3: Eigenfaces"
In this post we’re going to show off the coolest (imho) use case of Apache Mahout: rolling your own distributed algorithms. All of these posts are meant for you to follow along at home, and it is entirely possible you don’t have access to a large YARN cluster. That’s OK. Short story: they’re free on […] Read more "Deep Magic, Volume 2: Absurdly Large OLS with Apache Mahout"
I was at Apache Big Data last week and got to talking with some of the good folks at the Apache Mahout project. For those who aren’t familiar, Apache Mahout is a rich machine learning and linear algebra library that originally ran on top of Apache Hadoop and more recently runs on top of […] Read more "Deep Magic, Volume 1: Visualizing Apache Mahout in R via Apache Zeppelin (incubating)"