Written by
Peter Brookes-Smith

Lessons in Life from an AI Agent

It’s the custom at Objectivity to invite our colleagues to give talks before the Christmas Party that will entertain and educate us all.

This year, I spent some of my personal time exploring artificial intelligence, and one of my targets was a sub-branch called reinforcement learning. It was fascinating, and in many ways I found a connection between the trade-offs needed within the AI model and some conscious or unconscious decisions that I've made in my life.
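The post doesn't name the trade-off explicitly, but in reinforcement learning the classic one is exploration versus exploitation: do you try something new, or stick with what has worked so far? A minimal sketch of that choice, assuming an epsilon-greedy strategy (one common approach, not necessarily the one used in the talk):

```python
import random

def epsilon_greedy(q_values, epsilon):
    """Pick an action: explore with probability epsilon, otherwise exploit.

    q_values is a list of estimated values, one per action.
    """
    if random.random() < epsilon:
        # Explore: try a random action, even if it currently looks worse
        return random.randrange(len(q_values))
    # Exploit: repeat the action that has worked best so far
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

A small epsilon means a cautious life of mostly repeating past successes; a large epsilon means lots of experimenting, with the occasional avoidable mistake.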

I suspect we’re all making similar choices, and I thought it could make an interesting topic for one of our Christmas Talks. I wanted to try to illustrate how some of those decisions can affect the outcomes for an AI agent that wants to achieve a particular goal.

I hoped that by crystallising that trade-off and demonstrating its impact in an alternate reality, it might help us all make some of those choices more consciously, more of the time.

The video of my talk is here, and it’s 16 minutes long:

With this link to one of my personal GitHub pages, you can see the AI agent and control his life choices yourself. The obstacles and death zones are generated randomly each time.

The simple rules by which the agent lives his life allow him to learn from experience and solve quite challenging problems. Although I didn’t cover this in my talk, I also designed a maze where one wrong step leads to death, then set the AI agent loose to learn the solution. There were more life lessons from that exercise, but it’s a longer story.
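The post doesn't say which learning rule the agent follows, but for a grid world with obstacles and death zones a common choice is tabular Q-learning, where the agent improves its value estimates purely from experience. A hedged sketch under that assumption (the `step` function is a hypothetical environment interface, not the talk's actual code):

```python
import random

def train_q_learning(n_states, n_actions, step, episodes=500,
                     alpha=0.5, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning: learn action values from trial and error.

    `step(state, action)` must return (next_state, reward, done) --
    a stand-in for the grid world with its obstacles and death zones.
    """
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Occasionally explore; otherwise take the best-known action
            if random.random() < epsilon:
                action = random.randrange(n_actions)
            else:
                action = max(range(n_actions), key=lambda a: q[state][a])
            next_state, reward, done = step(state, action)
            # Nudge the estimate toward reward plus discounted future value
            best_next = max(q[next_state])
            q[state][action] += alpha * (reward + gamma * best_next
                                         - q[state][action])
            state = next_state
    return q
```

Each death or success adjusts the table a little, which is why watching the agent gradually stop stumbling into the death zones is so compelling.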

If you’re interested in learning more about AI and thinking about how it can help solve problems or capitalise on opportunities in your business, please give me a call.


One thought in the comments

  1. Matt

    On a cold, rainy New Year’s Day, I’m sitting in my office with a custom-built NLP web crawler that stubbornly refuses to classify my data with a reasonable level of accuracy. I took a break and found myself watching your agent again, having first seen it in the Wendy House a little while ago.

    It never fails to mesmerise as it hunts around busily, trying to avoid oblivion. Anyway, it turns out a little diversion did the trick. I normalised some of my feature set and now it’s behaving much better – and that’s a subject for another blog in the near future.

    Loved the video and thanks for inspiring me to dive deep into MNIST and NNs with relentless passion. Look forward to learning more and collaborating on whatever comes our way in 2018. Cheers

