By Titash Neogi, Chief Architect and Entrepreneur in Residence at Kontiki Labs

As chief architect at Kontiki Labs, I wear two hats. The first is as an AI researcher, following new developments in AI and bringing them into the main body of our company's capabilities as needed.

The second is an AI evangelist / product management role, in which I work with businesses to understand their needs or problems and suggest the right AI-powered solutions for them. Needless to say, I am constantly toggling between developer and business roles, and always looking for workflows that optimise my available dev time.

During business travel, I tend to use my Sundays for some lock-down research and development around ML or AI. This is the narrative of a typical AI Sunday, where I decided to look at building a sequence-to-sequence (seq2seq) model-based chatbot using some already available sample code and data from the Cornell movie database.

Seq2seq models are a type of recurrent neural network (RNN) architecture well suited to chatbot and machine-translation problems. In this specific instance, my focus was to get the seq2seq model working by starting training on the RNN. The larger goal was to figure out the optimal workflow, in terms of cost and time to train, for training a seq2seq model.
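For readers unfamiliar with the architecture, here is a minimal sketch of the encoder-decoder idea behind seq2seq: one RNN compresses the input utterance into a hidden state, and a second RNN generates the reply from that state. I have used PyTorch purely for illustration; the framework choice, vocabulary size, hidden size, and the random toy batch are my assumptions, not details of the original sample code or the Cornell data.

```python
import torch
import torch.nn as nn

# Illustrative constants -- assumed values, not taken from the article.
VOCAB_SIZE = 1000   # shared source/target token vocabulary
HIDDEN = 256        # embedding and RNN hidden size

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, HIDDEN)
        self.rnn = nn.GRU(HIDDEN, HIDDEN, batch_first=True)

    def forward(self, src):
        # src: (batch, src_len) token ids; the final hidden state
        # summarises the whole input utterance.
        _, hidden = self.rnn(self.embed(src))
        return hidden

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, HIDDEN)
        self.rnn = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, VOCAB_SIZE)

    def forward(self, tgt, hidden):
        # Teacher forcing: feed the gold reply shifted right and
        # predict the next token at every step.
        output, hidden = self.rnn(self.embed(tgt), hidden)
        return self.out(output), hidden

encoder, decoder = Encoder(), Decoder()
criterion = nn.CrossEntropyLoss()
params = list(encoder.parameters()) + list(decoder.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)

# Toy batch: 2 utterance -> reply pairs of random token ids,
# standing in for tokenised movie-dialogue lines.
src = torch.randint(0, VOCAB_SIZE, (2, 10))
tgt = torch.randint(0, VOCAB_SIZE, (2, 8))

optimizer.zero_grad()
hidden = encoder(src)
logits, _ = decoder(tgt[:, :-1], hidden)
loss = criterion(logits.reshape(-1, VOCAB_SIZE), tgt[:, 1:].reshape(-1))
loss.backward()
optimizer.step()
print(f"one training step done, loss = {loss.item():.3f}")
```

In a real run, the random tensors would be replaced by batches of tokenised utterance-reply pairs from the movie dialogue data, and the single step above would loop over the full dataset for many epochs, which is exactly the training cost that the rest of this piece is about.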

When I started, I had three options: use our own GPU-powered desktop, a Google Cloud Platform GPU instance, or an Amazon EC2 GPU instance. It was an overcast Sunday and the clouds were building up – a perfect day to stay indoors and put your RNN through its paces.
