We couldn’t be more excited to share yet another feature update with you. Starting today, the ml.yaml file that configures each execution got a whole lot more powerful: you can now configure inputs as well as script parameters. With these two features in place, we’d like to take you on a journey and demo how to build a simple neural style transfer!
Machine Learning is all about building software that learns from data. But how do we get access to that data? And more specifically, how do we access it in MachineLabs from within an execution?
Each execution has full internet access, so in principle we can pull datasets from any reachable place.
That said, having to write boilerplate code just to download some datasets can be very tiresome and frustrating. After all, MachineLabs is about eliminating common Machine Learning frustration points to make the whole field much more accessible.
Today, we are happy to announce that you can define inputs in the ml.yaml file. These inputs are downloaded before the execution code starts, making them available to your script from the very beginning.
Each input is specified with a url and a name that is used as the file name on disk. This means we can download files with conflicting names and simply pick a different name for each written file.
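To make this concrete, a hypothetical inputs section could look like the following sketch. The URLs and the exact schema shown here are assumptions for illustration, not taken from official documentation:

```yaml
inputs:
  # Both remote files are called train.csv, which would conflict on disk;
  # the name field lets us store each one under a distinct file name.
  - url: https://example.com/datasets/a/train.csv
    name: train_a.csv
  - url: https://example.com/datasets/b/train.csv
    name: train_b.csv
```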
We even get nice progress bars for all our concurrent downloads.
Every now and then we’d like to allow configurable parameters for our own scripts, or we want to execute a third-party script that expects parameters. Until now, scripts that needed parameters could not be used with MachineLabs without rewriting parts of the script itself.
Today we are introducing another upgrade to the ml.yaml file that enables easy configuration of script parameters. Parameters are passed to the script in the specified order, which makes it possible to use positional arguments as well as named flags or any other values!
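As a sketch, a parameters section might look like this. The schema and the file names below are assumptions for illustration; what matters is that the values reach the script in the order they are listed:

```yaml
parameters:
  # Passed to the script in this order, i.e. as positional arguments
  - value: style_reference.jpg
  - value: base_image.jpg
  - value: outputs/generated_
```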
Let’s put our two new tools at work and build a neural style transfer without leaving the browser! A neural style transfer is the fancy academic term to describe the process of applying the style of a reference image (typically artistic) to another image. The technique became very popular in 2016 with the initial release of the PRISMA app which enabled users to take any photo from their smartphone and turn it into stunning art.
Today there are several demos for different Machine Learning frameworks that allow us to play around with this technique. We’d like to show you how easy it is to put one of these demos to work using the popular Keras framework with MachineLabs.
First we need an image that we’d like to turn into a stunning piece of art.
We took an image that shows the entire MachineLabs team at our very first company offsite in Spain this summer.
Next, we need a picture whose style we want to apply to our base image.
This one is a popular painting by Van Gogh but it could be any other picture we like.
The entire magic happens within the main.py file, but we didn’t come up with that code ourselves. It’s one of the example scripts that ships with Keras, and discussing its inner workings is out of the scope of this article.
For now, we want to focus on how quickly we can put that demo code to work, apply it to our own photos and tinker with the code. In fact, we believe that MachineLabs offers the simplest possible way to run your own neural style transfer code today.
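While the transfer algorithm itself is out of scope, it helps to see how little a script needs in order to pick up values from an ml.yaml parameters section: they simply arrive as positional command line arguments. The three-parameter order below is an assumption for illustration, not the exact signature of the Keras script:

```python
import sys

def parse_params(argv):
    """Map positional command line arguments to named values.

    The order (style image, base image, result prefix) mirrors how the
    parameters would be listed in ml.yaml; adjust it to match your own file.
    """
    style_image, base_image, result_prefix = argv[1:4]
    return style_image, base_image, result_prefix

if __name__ == "__main__":
    print(parse_params(sys.argv))
```

Because the arguments are plain strings in a fixed order, any third-party script that reads sys.argv works unmodified.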
Assuming you are already participating in our private beta (you should!), it only takes three simple steps to put that code to work.
2. Change the parameter value https://blog.thoughtram.io/images/banner/company-offsite-group.jpg in the ml.yaml file to any picture that you’d like to turn into a Van Gogh. Of course, you can also change the value of the first parameter to use a different style reference image if this Van Gogh painting is not your cup of tea.
The third parameter configures the result prefix and should be kept as outputs/generated_, as it implicitly specifies the path where the generated images are saved. In our previous article we described that outputs is a special path that we can write to in order to preserve any files generated by our execution.
The script will iterate ten times by default and create an image for each iteration. We only save the last five images though.
3. Hit the Fork & Run button to fork the lab and start the execution.
The execution takes about eight hours, so feel free to get some sleep and look at the results tomorrow. Once you come back, you can open the outputs tab and download the generated images from there.
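As mentioned above, the script iterates ten times and keeps only the images from the last five iterations. A minimal check for that behavior could look like the following sketch; the function name and defaults are made up for illustration:

```python
def should_save(iteration, total_iterations=10, keep_last=5):
    """Return True only for the final keep_last of total_iterations.

    Iterations are counted from 0, so with the defaults this saves
    iterations 5 through 9.
    """
    return iteration >= total_iterations - keep_last
```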
This is what our generated images looked like. We can also find them in the outputs tab of the execution, where we can download them right away!
If you apply this demo to your own images, please mention us at @machinelabs_ai on Twitter so we can help share your experiment with the world!
This is really just scratching the surface of what we aim to achieve. We’ll soon ship a bunch of other features that will enable more and more advanced use cases.
Here’s what’s on our roadmap right now:
- Folder support in lab file trees
- Uploads of custom datasets
- REST API to access outputs
- GPU support
- Docs, guides and tutorials on a new dedicated website that will help you get started
… and much more!
If you want to stay up to date with what we’re working on, also make sure to follow us on Twitter.
As always, you can still join our private beta. All you have to do is visit machinelabs.ai and log in with your GitHub account. We will put you in the next batch!
MachineLabs, Inc is a small, dedicated and 100% bootstrapped company. Our main goal is to provide the community with better tools to move the whole Machine Learning ecosystem forward.
You can make a difference and support our mission by becoming a patron on Patreon!