Behind the Scenes Pt. 4: The K8s Bugaloo.

Let me start off by saying how glad I am to be done with this post series.

I knew when I finished the project that I should have just written all four posts and timed them out for delayed release.  But I said, “nah, writing blog posts is future-rawkintrevo’s problem, and Fuuuuuuug that guy.”  So here I am again, trying to remember the important parts of a thing I did over a month ago, when what I really care about at the moment is Star Trek Bots. But unfortunately I won’t get to write a blog post on that until I haven’t been working on it for a month too (jk, hopefully next week, though I trained that algorithm over a year ago, I think).

OK. So let’s do this quick.

Setting up a K8s On IBM Cloud

Since we were using OpenWhisk earlier, I’m just going to assume you have an IBM Cloud account.  The bummer is you will now have to give them some money for a K8s cluster. I know it sucks.  I had to give them money too (actually I might have done this on a work account, I forget).  Anyway, you need to give them money for a three-node “real” cluster, because the free ones will not allow Istio ingresses, and we are going to be using those like crazy.

Service Installation Script

If you do anything on computers in life, you should really make a script so that next time you can do it in a single command line.  Following that theme, here’s my (ugly) script.  The short outline is:

  1. Install Flink
  2. Install / Expose Elasticsearch
  3. Install / Expose Kibana
  4. Chill out for a while.
  5. Install / Expose my cheap front end from a prior section.
  6. Setup Ingresses.
  7. Upload the big fat jar file.
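The steps above can be sketched roughly like this — a hedged outline, not my actual script, and every file name, service name, and URL here is a placeholder:

```shell
#!/usr/bin/env bash
# Sketch of the install-script outline; all manifest and service names are placeholders.

# 1. Install Flink (jobmanager + taskmanager deployments and their service)
kubectl apply -f flink-jobmanager.yaml
kubectl apply -f flink-taskmanager.yaml

# 2-3. Install and expose Elasticsearch and Kibana
kubectl apply -f elasticsearch.yaml
kubectl expose deployment elasticsearch --port=9200
kubectl apply -f kibana.yaml
kubectl expose deployment kibana --port=5601

# 4. Chill out until everything reports Running
kubectl get pods --watch

# 5-6. The cheap front end and the ingresses, same drill
kubectl apply -f cheap-front-end.yaml
kubectl expose deployment cheap-front-end --port=80
kubectl apply -f ingresses.yaml

# 7. Upload the big fat jar through Flink's REST API
curl -X POST -F "jarfile=@target/big-fat-jar.jar" http://flink.example.com/jars/upload
```

The real script has more waiting and swearing in it, but that’s the shape.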

Flink / Elasticsearch / Kibana

The Tao of Flink on K8s has long been talked about (since at least the last Flink Forward Berlin) and is outlined nicely here.  The observant reader will notice I even left a little note to myself in the script.  All in all, the Flink + K8s experience was quite pleasant.  There is one little kink I did have to hack around, and I will show you now.

Check out this line.  The long and short of it is: the jar we made is a verrrry fat boy and blew out the upload size limit, so we are tweaking this one setting to allow jars of any size to be uploaded.  The “right way” to do this in Flink is to leave the dependency jars in the local lib/ folder, but for <reasons> on K8s, that’s a bad idea.
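For reference, this is the kind of tweak I mean — a flink-conf.yaml fragment, where the key name is my best recollection (double-check it against your Flink version’s docs):

```yaml
# flink-conf.yaml: raise the REST server's payload cap so a very fat jar can be uploaded.
# 104857600 (100 MB) is the default; crank it up as needed.
rest.server.max-content-length: 524288000
```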

Elasticsearch, I only deployed as a single node. I don’t think multi-node is supposed to be that much harder, but for this demo I didn’t need it, and I was busy focusing on my trashy front-end design.
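A single-node Elasticsearch really can be this small — a hedged sketch, with the image tag as an assumption:

```shell
# Single-node Elasticsearch: discovery.type=single-node tells it not to go looking for a cluster.
kubectl create deployment elasticsearch --image=docker.elastic.co/elasticsearch/elasticsearch:7.3.2
kubectl set env deployment/elasticsearch discovery.type=single-node
kubectl expose deployment elasticsearch --port=9200
```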

Kibana works fine IF ES is running smoothly. If Kibana is giving you a hard time, go check ES.

I’d like to have a moment of silence for all the hard work that went into making this such an easy thing to do.

kubectl apply -f ...
kubectl expose deployment ...

That’s life now.

My cheap front end and establishing Ingresses.

A little kubectl apply/expose was also all it took to expose my bootleggy website.  There’s probably an entire blog post in just doing that, but again, we’re keeping this one high level. If you’re really interested, check out:

  • Make a simple static website, then Docker it up. (Example)
  • Make a yaml that runs the Dockerfile you just made (Example)
  • Make an ingress that points to your exposed service. (Example)
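The bullets above look roughly like this in practice — a sketch where the registry path, image name, and deployment name are placeholders, not the real ones from my repo:

```shell
# Wrap a static page in nginx, push it to a registry, run it on the cluster.
cat > Dockerfile <<'EOF'
FROM nginx:alpine
COPY index.html /usr/share/nginx/html/
EOF
docker build -t us.icr.io/my-namespace/cheap-front-end:1 .
docker push us.icr.io/my-namespace/cheap-front-end:1

kubectl create deployment cheap-front-end --image=us.icr.io/my-namespace/cheap-front-end:1
kubectl expose deployment cheap-front-end --port=80
```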

Which is actually a really nice segue into talking about Ingresses.  The idea is that your K8s cluster is hidden away from the world, operating in its own little universe.  We want to poke a few holes and expose that universe to the outside.

Because I ran out of time, I ended up just using the prepackaged Flink WebUI and Kibana as iFrames on my “website”.  As such, I poked several holes and you can see how I did it here:

Those were hand-rolled and have minimal nonsense, so I think they are pretty self-explanatory. You give it a service, a port, and a domain host. Then it just sort of works, bc computers are magic.
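For the shape of it, here’s a minimal hand-rolled ingress of that sort — host, service name, and port are placeholders, and the apiVersion matches the K8s of that era (newer clusters want networking.k8s.io/v1):

```shell
# Route a public host name to the exposed Kibana service inside the cluster.
kubectl apply -f - <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana-ingress
spec:
  rules:
  - host: kibana.my-demo.us-south.containers.appdomain.cloud
    http:
      paths:
      - path: /
        backend:
          serviceName: kibana
          servicePort: 5601
EOF
```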


So literally as I was finishing the last paragraph, I got word that my little project was awarded 3rd place. There were a lot of people in the competition, so it’s not like I was 3rd of 3 (I have a lot of friends who read this blog (only my friends read this blog?), and we tend to cut at each other a lot).

More conclusively though: a lot of times when you’re tinkering like me, it’s easy to geek out on one little thing and never build a full end-to-end system. Even if you suck at building some of the parts, it helps illustrate the vision.  Imagine you’ve never seen a horse. Then imagine I draw the back half of one and tell you to just imagine what the front is like. You’re going to be like, “WTF?”.  So to tie this back in to Brian Holt’s “Full Stack Developer” tweet: this image is still better than “close your eyes and make believe”.


fullstack - brian holt
Brian Holt, Twitter


I take this even further in my next post.  I made the Star Trek Bot algorithm over a year ago and had it (poorly) hooked up to Twitter. I finally learned some React over the summer, and now I have a great way to kill hours and hours of time. Welcome to the new Facebook.

At any rate, thanks for playing. Don’t sue me.




Behind the Scenes of “Rawkintrevo’s House of Real Time IoT Analytics, An AIoT platform MVP Demo”

Woo, that’s a title- amiright!?

It’s got everything- buzzwords, a corresponding YouTube video, a Twitter handle conjugated as a proper noun.


Just go watch the video– I’m not trying to push traffic to YouTube, but it’s a sort of complicated thing and I don’t do a horrible job of explaining it in the video. You know what, I’m just gonna put it inline.

Ok, so now you’ve seen that.  And you’re wondering: how in the heck?!  Well, good news- you’ve stumbled onto the behind-the-scenes portion, where I explain how the magic happened.

There’s a lot of magic going on in there, and some you probably already know and some you’ve got no idea. But this is the story of my journey to becoming a full stack programmer.  As it is said in the Tao of Programming:

There once was a Master Programmer who wrote unstructured programs. A novice programmer, seeking to imitate him, also began to write unstructured programs. When the novice asked the Master to evaluate his progress, the Master criticized him for writing unstructured programs, saying, “What is appropriate for the Master is not appropriate for the novice. You must understand Tao before transcending structure.”

I’m not sure if I’m the Master or the Novice- but this program is definitely unstructured AF. So here is a companion guide that maybe you can learn a thing or two / Fork my repo and tell your boss you did all of this yourself.

Table of Contents

Here’s my rough outline of how I’m going to proceed through the various silliness of this project and the code contained in my GitHub repo.

  1. YOU ARE HERE. A sarcastic introduction, including my dataset, WatsonIoT Platform (MQTT). Also we’ll talk about our data source- and how we shimmed it to push into MQTT, but obviously could (should?) do the same thing with Apache Kafka (instead). I’ll also introduce the chart- we might use that as a map as we move along.
  2. In the second post, I’ll talk about my Apache Flink streaming engine- how it picks up a list of REST endpoints and then hits each one of them.  In the comments of this section you will find people telling me why my way was wrong and what I should have done instead.
  3. In this post I’ll talk about my meandering adventures with React.js, and how little I like the Carbon Design System. In my hack-a-thon submission,  I just iFramed up the Flink WebUI and Kibana, but here’s where I would talk about all the cool things I would have made if I had more time / Carbon-React was a usable system.
  4. In the last post I’ll push this all on IBM’s K8s. I work for IBM, and this was a work thing. I don’t have enough experience on anyone else’s K8s (aside from microK8s, which doesn’t really count) to bad-mouth IBM. They do pay me to tell people I work there, so anything too rude in the comments about them will most likely get moderated out. F.u.

Data Source

See and scroll down to Data Source. I’m happy with that description.

As the program currently stands, right about here the schema is passed as a string. My plan was to make that an argument so you could submit jobs from the UI.  Suffice it to say, if you have some other interesting data source, either update that to be a command-line parameter (PRs are accepted) or just change the string to match your data.  I was also going to do something with schema inference, but my Scala is rusty and I never was great at Java, and tick-tock.
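If you do make it a parameter, submitting with a different schema becomes a one-liner — a sketch, assuming the job reads a hypothetical `--schema` flag off the command line (e.g. via Flink’s ParameterTool), which the code as written does not yet do:

```shell
# Hypothetical: pass the schema in at submit time instead of recompiling the jar.
flink run big-fat-jar.jar \
  --schema '{"station_id": "string", "num_bikes_available": "int"}'
```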

Watson IoT Platform

I work for IBM, specifically Watson IoT, so I can’t say anything bad about Watson IoT Platform.  It is basically built on MQTT, a pub-sub protocol IBM wrote in 1999 (which was before Kafka by about 10 years, to be fair).
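Under the covers, a device publish is plain MQTT — here’s a hedged sketch with mosquitto_pub, where the org, device type/id, and token are placeholders (the topic shape and the use-token-auth username are Watson IoT conventions):

```shell
# Publish one JSON event to Watson IoT Platform over TLS MQTT.
mosquitto_pub \
  -h YOUR_ORG.messaging.internetofthings.ibmcloud.com -p 8883 \
  -i "d:YOUR_ORG:divvy-station:station-001" \
  -u use-token-auth -P "$DEVICE_TOKEN" \
  --capath /etc/ssl/certs \
  -t "iot-2/evt/status/fmt/json" \
  -m '{"num_bikes_available": 7}'
```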

If you want to see my hack to push data from the Divvy API into Watson IoT Platform, you can see it here. You will probably notice a couple of oddities.  Most notably, only 3 stations are picked up to transmit data.  This is because the free account gets shut down after 200MB of data, and you have to upgrade to a $2500/mo plan bc IBM doesn’t really understand linear scaling. /shrug. Obviously this could be easily hacked to just use Kafka and update the Flink source here.
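If you’d rather dodge the 200MB ceiling entirely, the same shim is a few lines against Kafka — a sketch, assuming a local broker and jq on the path (the GBFS URL is Divvy’s public station feed, but double-check it before trusting me):

```shell
# Poll the Divvy station feed and pipe each station's status into a Kafka topic.
while true; do
  curl -s https://gbfs.divvybikes.com/gbfs/en/station_status.json \
    | jq -c '.data.stations[]' \
    | kafka-console-producer.sh --broker-list localhost:9092 --topic divvy-stations
  sleep 30
done
```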

The Architecture Chart

That’s also in the github, so I’m going to say just look at it on

Coming Up Next:

Well, this was just about the easiest blog post I’ve ever written.  Up next, I may do some real work and get to talking about my Flink program, which picks up a list of API endpoints every 30 seconds, does some sliding-window analytics, and then sends each record and the most recent analytics to each of the endpoints that were picked up- and how, in its way, this gives us dynamic model calling. Also, I’ll talk about the other cool things that could/should be done there that I just didn’t get to. /shrug.

See you, Space Cowboy.