Mobile robots break traditions


Putting people first: Safety, efficiency, and the autonomous workplace

By Daniel Theobald

A new wave of automation is building, and it will affect (almost) everything. Robotic technology was once limited to operating behind light curtains, cages, and doors. Now, mobile robots are breaking away from these traditional confines to appear in dynamic, peopled environments like offices, health-care facilities, classrooms, factory floors, agriculture, and distribution centers.

The convergence of better sensors, actuators, computation, and algorithms has reached a crucial threshold, bringing advanced robotics out of the lab and driving significant industry investment into these new technologies. Combined with increased global competition and pressure to reduce costs, this convergence is driving the start of the “Automation Age,” the most influential period of technological advancement since the invention of the steam engine.

Investment in robotics promises handsome dividends for decades to come, and may well determine the next global superpowers. Automation is the key to onshoring much of the U.S. manufacturing base lost in the past several decades, as well as presenting new opportunities in such industries as apparel manufacturing, food distribution and service, and entertainment. To capitalize on these benefits and establish a global economic leadership position, a country will need a sustained commitment to invest in automation technologies.

Automation is an ideal tool for performing the dull, dirty, and dangerous—shifting humans to greater value-adding activities. New technologies like advanced sensing and flexible manipulation are making robots safer and more efficient in interacting with humans. Meanwhile, interoperability and resource optimization are preparing workplaces for close collaboration of humans and robots as well as lights-out automation.

Heavy logistics can be performed by mobile robots autonomously and safely, without risk of injury to human operators.

Safety through advanced sensing

Mobile robots that operate safely in close proximity and in cooperation with humans need several layers of sensing, planning, and motion control. Advanced sensing gives robots the ability to build accurate three-dimensional models of their surroundings and then reason about their environment beyond just distinguishing obstacles from free space. This level of understanding was superfluous when robots were in cages, but is critical for them to be effective coworkers with humans.

For example, modern robots can distinguish between a human and an inanimate object, understand the difference between the two, and take the appropriate action. The equipment is not just sensing the surroundings; it is making sense of its environment. When a robot comes into contact with a human, compliant actuation and tactile sensing allows it to move in a safe manner that avoids injury and minimizes any force of impact. Understanding and making sense of its environment is a key enabler to plan tasks and motions that are predictable and acceptable to human coworkers.
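The distinction matters in code as well as in principle: the robot's reaction should depend on *what* it has detected, not merely on whether something occupies space. The sketch below is purely illustrative (the `Detection` class, thresholds, and speeds are assumptions, not any vendor's implementation):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str          # "human" or "object" -- assumed output of a classifier
    distance_m: float  # range to the detection, in meters

def plan_reaction(detections, cruise_speed=1.5):
    """Return a commanded speed (m/s): stop for nearby people,
    creep past close inanimate obstacles, otherwise cruise."""
    speed = cruise_speed
    for d in detections:
        if d.kind == "human" and d.distance_m < 2.0:
            return 0.0                   # a nearby person always means stop
        if d.distance_m < 1.0:
            speed = min(speed, 0.3)      # slow down near any close obstacle
    return speed
```

A caged robot of the past needed only the `return 0.0` branch; the human/object distinction is what lets a mobile robot keep working safely instead of shutting down at every detection.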

These inherently safe systems stand in stark contrast to the industrial robot arms of the past, which were trained for a single function and had, perhaps, one sensor that detected when something entered the workspace and shut down the whole system.

Sensorized equipment

The same advanced sensing that gives a robot its autonomous capabilities can first be used for basic safety monitoring aboard equipment already operated by humans. By “sensorizing” a piece of industrial equipment for safety, industries across every sector can prevent injury, damage, and delays, saving billions of dollars annually.

Sensorizing industrial equipment for safety implicitly includes computing capabilities to process the sensor data. The processing power already available can provide more capabilities than just operational aids and awareness; features can be introduced to make it more fully autonomous. For example, a robot begins as a safety-enabled operator-controlled unit—much like a car today is outfitted with forward collision avoidance and side-view assist. All its sensors are enabled and collecting data, but its ability to “act” on its environment is limited to alerting the human operator or bringing the vehicle to a safe stop.

Human-operated machines can use information from sensors to monitor blind spots and warn the driver or even stop the machine. Robotic vehicles can behave appropriately and predictably if they know that pedestrians are present.

For example, a forklift that is fitted with sensors for safety could perceive a human in its path and come to a full stop before the operator can even react, potentially curbing the 110,000 annual major forklift accidents that are responsible for one in six workplace deaths. The National Safety Council estimated that the total cost of wage and productivity losses from safety-related death and injury was $188.9 billion in 2011. With the cost savings from reduced accidents through a safety-enabled sensorized forklift, businesses can fund additional features toward full autonomy.

Machine perception

Advanced algorithms allow robots to identify humans in various poses even with just monocular cameras, and therefore enable advanced safety features. Adding this information as an overlay to existing rear-view or other assistive camera systems available for industrial equipment is a simple first step. Feeding the same information to a safety system installed on the equipment goes a step further by removing the need for the operator to continually interpret a marked-up video feed or other interface.

That augmented safety system could be configured to audibly alert the operator of the presence of humans in the vicinity, and also limit the equipment’s velocity in directions that bring it close to detected pedestrians. The simple presence of a pedestrian near industrial equipment is not necessarily cause for concern. Much depends on what that person is doing, where the person is looking, and whether or not he or she is aware of the machine close by. Understanding a person’s activity and gaze from sensor data can help to determine what kind of warning or action is appropriate.
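One way to picture this graded response is a simple decision rule that escalates based on distance and estimated awareness. The thresholds and action names below are hypothetical, chosen only to illustrate the idea, not drawn from any safety standard:

```python
def choose_response(distance_m, facing_machine):
    """Map a detected pedestrian to a safety-system action.

    distance_m:     range to the person, in meters
    facing_machine: True if gaze estimation suggests the person
                    is aware of the machine
    """
    if distance_m < 1.5:
        return "stop"              # too close: halt regardless of awareness
    if distance_m < 4.0:
        # an unaware person warrants an audible warning;
        # an aware one may only need the machine to slow down
        return "audible_alert" if not facing_machine else "limit_speed"
    return "monitor"               # far away: keep tracking, no action
```

In practice the awareness estimate would come from gaze and pose models, and the thresholds would depend on the machine's stopping distance at its current speed.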

Sensor information from many different sensors is combined to make sense of the environment around the machine.

In addition to sensing a human coworker’s gaze and pose, a robot can take direction from nonverbal cues. This is particularly helpful in noisy environments. For example, by recognizing air marshalling gestures, a pedestrian can command or interact with a piece of automated equipment, whether the person is actively involved in the robot’s activity or pursuing an unrelated activity that happens to bring him or her close to the machine.
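At its simplest, gesture-based interaction reduces to a mapping from recognized gestures to machine commands, with a safe default for anything unrecognized. The gesture and command names here are invented for illustration:

```python
# Hypothetical mapping from recognized air-marshalling gestures
# to commands for an automated vehicle.
GESTURE_COMMANDS = {
    "arms_crossed_overhead": "emergency_stop",
    "palms_pushing_forward": "slow_down",
    "beckoning": "proceed",
    "point_left": "turn_left",
}

def command_for(gesture):
    # Any unrecognized gesture defaults to holding position --
    # the conservative choice around pedestrians.
    return GESTURE_COMMANDS.get(gesture, "hold_position")
```

The important design choice is the default: when the recognizer is unsure, the machine should do the safest thing, not the most recent thing.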

Flexible manipulation

Being able to effectively manipulate objects in semistructured environments gives robots the dexterity and agility to not only work alongside humans more efficiently, but to match or exceed them in skill and speed. In the past, for robot arms to be effective, the work piece had to be precisely fixtured to millimeter accuracy. This was because the arms usually did not actively sense the work piece, and instead carried out blind motions based on prerecorded paths. The new sensing and computing allows much of the costly fixturing to go away and enables robot arms to provide value in new areas.

One benefit of this trend is that it allows robots to weigh less. To achieve submillimeter accuracy with blind motion over a large workspace, a robot arm had to be exceptionally rigid; any flexibility would lead to a loss of accuracy. This is why traditional arms generally weighed 100 times more than the objects they lifted. Rigidity requires bulk.

In contrast, the human arm can lift about 10 times its own weight. When you “close the loop” with vision sensors, the bulk can go away, and with it much of the cost. Lightweight, low-precision arms coupled with advanced vision systems are being successfully applied in many areas. Taking the next step of effectively combining autonomous mobility and manipulation will open a new range of applications that previously only humans could perform.
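“Closing the loop” can be sketched in a few lines: instead of replaying a blind, prerecorded path, the controller repeatedly measures the offset between gripper and target (from vision) and commands a fraction of that error as a correction. This is a minimal proportional-control sketch; the gain and the 2-D positions are illustrative assumptions:

```python
def visual_servo_step(gripper_xy, target_xy, gain=0.5):
    """One control-loop iteration: measure the gripper-to-target error
    and move the gripper a fraction (gain) of the way toward the target."""
    ex = target_xy[0] - gripper_xy[0]
    ey = target_xy[1] - gripper_xy[1]
    return (gripper_xy[0] + gain * ex, gripper_xy[1] + gain * ey)

# Each pass halves the remaining error, so even a flexible, low-precision
# arm converges on the target without millimeter-accurate fixturing.
pos = (0.0, 0.0)
for _ in range(10):
    pos = visual_servo_step(pos, (0.10, 0.04))
```

The arm no longer needs to be rigid enough to hit the target in one blind motion; it only needs to be repeatable enough for the corrections to converge.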

Achieving efficient operations

Enabled by machine vision, safe and nimble next-generation robots must integrate with their surroundings and their coworkers. In the same way that consumers expect their PCs to run third-party software and to connect to the Internet, robots must be able to work with other machines and with people in their environment.


Unlike a pallet truck or a conveyor belt, robots have added intelligence that, when connected to other things in a dynamic environment, creates enormous potential for efficiency and cost savings. If your robots cannot interoperate, their value is ultimately limited. To maximize return on investment (ROI) through automation requires a multidisciplinary approach to integrate with other hardware and software systems in your facility through open interfaces and standard protocols. These systems include other robots from the same or third-party vendors, facility doors and elevators, and fire systems, as well as asset tracking, order fulfillment, or warehouse management software.
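What an “open interface” to facility infrastructure might look like in practice is a vendor-neutral message any controller can parse. The payload shape below is invented for illustration; no standard protocol is implied:

```python
import json

def door_open_request(robot_id, door_id):
    """Build a hypothetical vendor-neutral request asking a facility
    controller to open a door for an approaching robot."""
    return json.dumps({
        "type": "door.open",       # the operation being requested
        "requester": robot_id,     # which robot is asking
        "target": door_id,         # which piece of infrastructure
        "priority": "normal",
    })
```

The point is not this particular schema but the principle: when every robot and every door speaks a documented, shared format, third-party systems can participate without custom integration for each vendor.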

To that end, it is important to find a partner who will bring an interoperable automation ecosystem to the table, not just a single robotic product that operates independently. This approach allows for quick ROI by plugging specific robots into critical-path activities, but also lays the groundwork for fully autonomous lights-out operations and long-range ROI through complete system integration.

Resource optimization

Vecna’s QC Bot navigates corridors and warehouses autonomously, removing the burden of manual transport.

When robots, humans, and facility infrastructure operate as part of the same system, organizations must also manage them as one system, especially in environments where robots navigate peopled spaces and often share responsibilities with people. A mission optimization planner improves system efficiency by managing resources to employ the unique strengths of both humans and robots.

The ultimate goal of the mission optimization planner is to sort and process all tasks in the shortest time and with the fewest resources possible, while ensuring that priorities are properly handled. Mission optimization planners actively control workflow rates, allowing humans and robots to work together safely and at maximum efficiency.

A mission optimization planner will also autonomously assign on-demand tasks to either robot or human based on resource availability and the urgency of the task. If a bottleneck starts to occur that cannot be buffered autonomously, the system can immediately inform supervisors and recommend reallocation of staff to address the issue. Mission optimization planners provide the strategic advantage of a single backbone to meet a large variety of needs, including multiple disparate autonomous platforms working together seamlessly with humans within a facility.
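The core assignment step can be illustrated with a greedy sketch: take pending tasks in urgency order, give each to an available resource with the required capability, and flag anything unassignable for a supervisor. A real planner would optimize globally over travel time, battery, and deadlines; this only shows the inputs the text describes (resource availability and task urgency), with all names invented:

```python
def assign_tasks(tasks, resources):
    """tasks:     list of (task_name, urgency, required_capability)
    resources: dict of resource_name -> set of capabilities
               (each resource handles at most one task in this sketch)
    Returns (plan, backlog): assignments plus tasks needing escalation."""
    free = dict(resources)
    plan, backlog = {}, []
    for name, urgency, need in sorted(tasks, key=lambda t: -t[1]):
        match = next((r for r, caps in free.items() if need in caps), None)
        if match:
            plan[name] = match
            del free[match]            # resource is now busy
        else:
            backlog.append(name)       # bottleneck: recommend reallocation
    return plan, backlog
```

The backlog list is what triggers the supervisor notification described above: the planner handles what it can autonomously and escalates only what it cannot buffer.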

The autonomous workplace

What does all of this mean? With advanced sensing, mobility, manipulation, interoperability, and resource optimization, robots will take on more activities traditionally performed by humans. Many interpret this trend as a threat to the human worker. And it is if we do not start to change our ways of thinking.

I would like to see robots make it possible for us to accomplish more than just getting ahead of the Joneses or outcompeting others in the global marketplace. Robotics should allow us to raise the standard of living for everyone on this planet and allow us to pursue the arts and sciences to a greater degree than ever before. If we are to enable this social structure, we need to start figuring it out now.

I say let the robots do the dirty, dull, and dangerous, and let humans enjoy the life they have worked so hard to build since the invention of the steam engine.

Fast Forward

  • Robots are leaving the lab and making their way into industrial applications.
  • Advanced sensing enables robots to figure out their environment.
  • Environment interoperability and resource optimization prevent bottlenecks before they occur. 

About the Author

Daniel Theobald is chief technology officer and cofounder of Vecna and cofounder of MassRobotics. He founded Vecna in 1998 with the mission to empower humanity through transformative technologies. Without taking any outside investment, Theobald grew his company to an extensive network of employees, partners, and contractors, serving a worldwide customer base. A true visionary, Theobald has designed and developed several robots, including the Bear disaster recovery robot and the QC Bot logistics solution, as well as advanced automation components, such as machine perception technology. He serves on the board of MassRobotics.

Reader Feedback

We want to hear from you! Please send us your comments and questions about this topic to