Artash Nath, Grade 9 Student, Toronto.

27 February is World Information Architecture Day. World Information Architecture Day (WIAD) celebrates information architecture and shares knowledge and ideas from analogue to digital, from design to development, and from students to practitioners, both globally and locally. This year, WIAD events were organized in 42 locations. Toronto was one of them; the event there was organized by students at the University of Toronto Faculty of Information, with a diverse line-up of speakers.

It was a pleasure to give a talk at WIAD Toronto 2021 on “Artificial Intelligence and Biases: The Role of Curiosity”. The talk explored the link between information architecture and biases in artificial intelligence algorithms. Artificial intelligence has become a defining force of our times. It has embedded itself in our lives in more ways than we are aware of: from seemingly innocuous applications such as voice recognition in smartphones and music recommendations based on our search history, to the vetting of university applications and job applications, hospital triage, judicial sentencing, and facial recognition used by governments and the private sector.

And this is where the mischief occurs: algorithmic biases become embedded in the information architecture and in the datasets used to train artificial intelligence algorithms. While biases have always existed in our society, they were relatively easy to understand in terms of where they were happening, why they were happening, and who was responsible for them, and there was usually a recourse to correct them.

Algorithmic biases are like black boxes. They are so deeply rooted, and require such technical knowledge of algorithms and such an understanding of what information is collected, synthesized, and used for training, that very few people can connect the unfairness meted out to them to biases in algorithms. With the growing use of robotics and artificial intelligence in all aspects of our lives, we have to find innovative ways to tackle algorithmic biases, perhaps by using algorithms themselves to identify and declare the biases within them.

See the ARTEMIS Robot in action:

This is the area I am working on: building curiosity-based robotics and algorithms. They constantly question the data used by the algorithms, through multimodal means, to correct for biases. When they see that most candidates rejected from a university application pool belong to a particular race or economic stratum, they question why this is happening rather than becoming better trained at rejecting candidates of that race or stratum. These curiosity-based algorithms have to be built in natively at the information architecture stage, where they can create systems that are fair, explainable, correctable, and respectful of human rights.
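As a minimal sketch of the kind of check such a curiosity-based algorithm might run, the snippet below audits rejection rates across groups and flags any group rejected far more often than average, so the question "why is this happening?" can be raised instead of the pattern being silently learned. The function name, the group labels, the sample data, and the 0.2 threshold are all hypothetical, chosen only for illustration:

```python
from collections import Counter

def audit_rejection_rates(applications, threshold=0.2):
    """Flag groups whose rejection rate exceeds the overall
    rejection rate by more than `threshold` (absolute difference)."""
    totals = Counter()      # applications seen per group
    rejections = Counter()  # rejections per group
    for group, accepted in applications:
        totals[group] += 1
        if not accepted:
            rejections[group] += 1

    overall = sum(rejections.values()) / sum(totals.values())
    flagged = {}
    for group in totals:
        rate = rejections[group] / totals[group]
        if rate - overall > threshold:
            flagged[group] = round(rate, 2)
    return overall, flagged

# Hypothetical admissions decisions: (group label, accepted?)
data = [("A", True), ("A", True), ("A", False), ("A", True),
        ("B", False), ("B", False), ("B", True), ("B", False)]

overall, flagged = audit_rejection_rates(data)
# Group B is rejected 3 times out of 4, well above the overall rate,
# so the audit surfaces it for human review.
```

A real system would use proper statistical tests and far richer data, but the design point stands: the questioning step runs alongside the decision-making algorithm rather than being bolted on after deployment.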

Watch my full presentation at WIAD 2021 Toronto here:

