Each year in San Jose, California, NVIDIA holds a conference on GPU technology called GTC, the “GPU Technology Conference”, and I was there for the 2016 installment (April 4-7). This is my post about what I saw, heard and ate… not necessarily in that order.
The conference spans four days and hosts hundreds of lectures on a wide variety of topics within the area of GPU technology.
The main subjects this year were artificial intelligence, aka “deep learning”, and rendering techniques (VR, real-time rendering). I was there for the rendering techniques, so I didn’t get to see anything about deep learning other than where it was applied (see exhibition below).
Deep learning is a rather recent idea (2006) in which neural networks are trained on data using several stacked layers, letting the network “dive deeper”, so to speak. The reason for it being so popular now is GPU technology. To deduce something from data, you need a lot of data and a lot of computing power. GPUs have in recent years become so cheap that running neural networks on them has become feasible, which is why so many papers have been written on the subject lately. Deep learning is a fascinating tool which I’m sure I will explore more closely in future posts.
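The core idea of those stacked layers, each one transforming the previous layer’s output before passing it on, can be sketched in a few lines of NumPy. The layer sizes, random weights and ReLU activation below are arbitrary choices purely for illustration, not taken from any talk at the conference; the matrix multiplications are exactly the kind of work GPUs are good at.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple nonlinearity; without it, stacked layers would
    # collapse into a single linear transform.
    return np.maximum(0, x)

# Three stacked layers: input (4 features) -> hidden (8 units)
# -> hidden (8 units) -> output (2 scores). Weights are random
# here; training would adjust them to fit the data.
layers = [
    rng.standard_normal((4, 8)),
    rng.standard_normal((8, 8)),
    rng.standard_normal((8, 2)),
]

def forward(x, layers):
    # Each hidden layer transforms the previous layer's output.
    for w in layers[:-1]:
        x = relu(x @ w)
    return x @ layers[-1]  # final layer outputs raw scores

batch = rng.standard_normal((16, 4))  # 16 samples, 4 features each
scores = forward(batch, layers)
print(scores.shape)  # (16, 2)
```

On a GPU, the same matrix multiplications run in parallel across thousands of cores, which is what makes training networks with many such layers practical.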
There was also a large exhibition area with drones, VR, and hardware and software solutions for the guests to try out live. The VR queue was too long, so I didn’t try it out. I will, however, write about my own experiences with VR in the near future.
The exhibition area was huge and filled with all kinds of technology, as the photos show. Here I will write about the two most interesting software solutions that I came across.
This is a very cool company that specializes in texture synthesis. Using deep learning, Artomatix can perform many operations on a texture, for instance: remove seams, remove gradients, or make a texture repeatable so that it looks good from a distance. I will leave you with a video of the tool in action. I will most probably write about Artomatix in the near future.
Have you ever wanted to direct Arnold Schwarzenegger saying “get to da choppa”? Well, now you can. Using deep learning algorithms, Face2Face can reanimate a source video using your own face. Sounds complicated? Check out the video below.
This was my first year at GTC and my first visit to the USA. It was a real experience, and I got the hang of running around and taking notes. Next time I will step it up and hopefully have something to show at the exhibition. Until then.