by Fred Abler
In the early aughts (around 2000), while working in a CAD research lab at Cal Poly, San Luis Obispo, I became intensely interested in 3D ‘object-agents’. Today’s CAD/BIM users might call them “smart objects”, but there’s really a lot more to it than that.
Object-agents are ‘smart objects’… but with the added ability of autonomous behavior in their surrounding environments. 3D object-agents can logically ‘sense’ their environmental context (virtual or physical), and then act or react appropriately.
When they exist, 3D object-agents will take BIM to a whole new level. Instead of simple (post-facto) clash detection, BIM software will be able to assist architects and designers with real-time conflict identification and… perhaps even CONFLICT RESOLUTION.
[Purple Comment: BIM is a process-bomb that frontloads every project with conflict identification, which must then be resolved using new project delivery models – across a broad set of stakeholders. Until BIM assists in conflict resolution, we do not have a ‘whole product’.]
For example, imagine a lighting designer placing a 3D lighting pendant in the BIM model. But then, it autonomously ‘jumps’ to another location farther down the corridor – farther away from the ‘air vent’ object-agent.
Why? Because these BIM object-agents are logically aware of each other, and recognize a future conflict. They’ve worked out that every time the ‘air supply’ comes on, the pendant will start to swing unacceptably. So the 3D pendant object-agent moves a pre-defined distance away.
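The pendant scenario boils down to a simple sense-and-react loop: each object-agent knows its own position and clearance rule, detects when a neighbor violates that rule, and relocates itself. Here is a minimal sketch of that idea in Python – all names (`ObjectAgent`, `min_clearance`, etc.) are hypothetical, not part of any real BIM API:

```python
from dataclasses import dataclass

@dataclass
class ObjectAgent:
    """A toy BIM object-agent positioned along a one-dimensional corridor."""
    name: str
    position: float             # location along the corridor, in meters
    min_clearance: float = 0.0  # required distance from conflicting agents

    def conflicts_with(self, other: "ObjectAgent") -> bool:
        # A conflict exists when another agent sits inside our clearance zone.
        return abs(self.position - other.position) < self.min_clearance

    def resolve(self, other: "ObjectAgent") -> None:
        # React by moving a pre-defined distance away from the other agent.
        if self.conflicts_with(other):
            direction = 1.0 if self.position >= other.position else -1.0
            self.position = other.position + direction * self.min_clearance

# The pendant requires 3 m of clearance from any air-supply vent.
vent = ObjectAgent("air_vent", position=10.0)
pendant = ObjectAgent("pendant", position=11.0, min_clearance=3.0)
pendant.resolve(vent)
print(pendant.position)  # the pendant 'jumps' down the corridor to 13.0
```

A real implementation would of course reason in 3D and consult airflow simulations rather than a single clearance number, but the sense–decide–act shape of the logic is the same.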
My hunch is that such virtual 3D BIM-bots will behave much like their real-world counterparts (i.e. robots). This intuition ultimately led me to read roboticist Rodney Brooks’ masterwork – FLESH AND MACHINES: How Robots Will Change Us.
Brooks is a super-genius from South Australia. His first robot, built at the age of 14, played perfect tic-tac-toe. After degrees in pure mathematics, and concurrently with a brilliant academic career, Brooks founded iRobot (1990–2008) and, more recently, Rethink Robotics (2009–present).
While Director of MIT’s Artificial Intelligence (AI) Laboratory, Brooks wrote F+M in 2002. The book was the result of a forced vacation with his then-wife in her native Thailand. Fortunately for us, Brooks had little to do for three weeks while his wife’s family “gossiped constantly in Thai”.
Estranged by language and in a remote rainforest, Brooks opted to go on a mental odyssey to determine the true nature of intelligence. The result of this mental walkabout was an epiphany that arguably remains the greatest insight into AI to date.
Brooks realized that animals do not walk around with ready-made mental maps of their environment already inside their heads. Rather, they must be reacting spontaneously to cues in their environments – which is how they can navigate new ones.
While today this seems like a BFO (Blinding Flash of Obviousness), at the time it was a massive breakthrough in AI. Brooks called it ‘situated intelligence’: there simply can be no intelligence (as we know it) without a corporeal body interacting with a physical environment.
This led, in turn, to Brooks’ ‘subsumption architecture’, in which he embedded layered logic in laboratory robots. Reacting spontaneously to simple cues in the physical environment, the robots’ logic layers cascaded opportunistically, and his MIT robots began exhibiting emergent behaviors that were downright amazing.
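The layered-logic idea can be sketched in a few lines. This is a drastic simplification of Brooks’ actual architecture (which wires layers together with suppression and inhibition links, not a simple priority list), but it shows the core move: each layer maps sensor readings to a proposed action, and an urgent low-level competence like obstacle avoidance can override the default behavior. All function and key names here are invented for illustration:

```python
def avoid(sensors):
    """Reflex layer: turn away if an obstacle is dangerously close."""
    if sensors.get("obstacle_distance", float("inf")) < 0.5:
        return "turn_left"
    return None  # nothing urgent; defer to the next layer

def wander(sensors):
    """Default layer: otherwise, just keep moving."""
    return "move_forward"

# Layers in override order; the first layer with something to say wins.
LAYERS = [avoid, wander]

def arbitrate(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action
    return "idle"

print(arbitrate({"obstacle_distance": 0.3}))  # avoidance overrides wandering
print(arbitrate({"obstacle_distance": 2.0}))  # wander layer takes over
```

The emergent behaviors Brooks observed came from many such layers running concurrently against a live sensor stream, with no central world model in between.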
Brooks’ approach to AI is now known as nouvelle AI. It is a suitably modest approach that focuses on ‘insect-level’ intelligence, like avoiding objects. Reading Brooks’ description of Herbert – a robot that searches desks and tables for empty soda cans, which it picks up and carries away – will remind you fondly of Dr. Tyrell in Blade Runner.
Nouvelle AI is responsible for the recent success of the JPL Martian buggy-jump. In F+M, Brooks’ forward-looking Chapter 3 – Planetary Ambassadors: “Getting to Mars” – is all the more fascinating now that autono’mobots like the Curiosity rover have landed on Mars.
And if SUV-sized robots can successfully land themselves on the Red Planet, well… they should certainly be able to do a little more around the house! At least that’s what three Swedish researchers at the Centre for Autonomous Systems (CAS) at KTH, the Royal Institute of Technology, are thinking.
These roboticist-researchers are building on Rod Brooks’ work on model-based vision with a rather novel crowdsourcing project they call Kinect@Home.
Fig 2 – Scanning your home with your Xbox Kinect for Kinect@Home will help researchers create 3D reference models that will make future robots more capable and spatially literate.
To help robot-kind, the Swedish roboticists are imploring the 18 million Microsoft Xbox 360 Kinect owners to create 3D scans of objects, their homes, and even… themselves. The objective is to generate large libraries of 3D models, then analyze them to create algorithms that improve robotics and computer-vision software.
For example, a domestic robot might be programmed to operate a washing machine. But which washing machine? Without a large library of 3D washing machines, the robots would be impractical – limited to only a handful of machine types.
So if you own a Kinect, and want to help make robot-kind become more whirlpool-enabled, download the free software drivers and plugin at Kinect@Home. Scan your home, and upload 3D models to the cloud.
To date, the Kinect@Home project has a total of 154 Kinect models.
In short – there is something very meta about helping robots learn to recognize other robots! For BIM’s sake, I hope that virtually embodied autonomous agents will pivot BIM from a conflict-identification technology to a conflict-resolution platform.