robotics planet

http://planet-robotics.net

all robotics in one place


A Path Towards Reasonable Autonomous Weapons Regulation

 − at 17:50, 21. Oct. 2019

Editor’s Note: The debate on autonomous weapons systems has been escalating over the past several years as the underlying technologies evolve to the point where their deployment in a military context seems inevitable. IEEE Spectrum has published a variety of perspectives on this issue. In summary, while there is a compelling argument to be made that autonomous weapons are inherently unethical and should be banned, there is also a compelling argument to be made that autonomous weapons could potentially make conflicts less harmful, especially to non-combatants. Despite an increasing amount of international attention (including from the United Nations), progress towards consensus, much less regulatory action, has been slow. The following workshop paper on autonomous weapons systems policy is remarkable because it was authored by a group of experts with very different (and in some cases divergent) views on the issue. Even so, they were able to reach consensus on a roadmap that all agreed was worth considering. It’s collaborations like this that could be the best way to establish a reasonable path forward on such a contentious issue, and with the permission of the authors, we’re excited to be able to share this paper (originally posted on Georgia Tech’s Mobile Robot Lab website) with you in its entirety.

Autonomous Weapon Systems: A Roadmapping Exercise

Over the past several years, there has been growing awareness and discussion surrounding the possibility of future lethal autonomous weapon systems that could fundamentally alter humanity’s relationship with violence in war. Lethal autonomous weapons present a host of legal, ethical, moral, and strategic challenges. At the same time, artificial intelligence (AI) technology could be used in ways that improve compliance with the laws of war and reduce non-combatant harm. Since 2014, states have come together annually at the United Nations to discuss lethal autonomous weapons systems [1]. Additionally, a growing number of individuals and non-governmental organizations have become active in discussions surrounding autonomous weapons, contributing to a rapidly expanding intellectual field working to better understand these issues. While a wide range of regulatory options have been proposed for dealing with the challenge of lethal autonomous weapons, ranging from a preemptive, legally binding international treaty to reinforcing compliance with existing laws of war, there is as yet no international consensus on a way forward.

The lack of an international policy consensus, whether codified in a formal document or otherwise, poses real risks. States could fall victim to a security dilemma in which they deploy untested or unsafe weapons that pose risks to civilians or international stability. Widespread proliferation could enable illicit uses by terrorists, criminals, or rogue states. Alternatively, a lack of guidance on which uses of autonomy are acceptable could stifle valuable research that could reduce the risk of non-combatant harm.

International debate thus far has predominantly centered on whether states should adopt a preemptive, legally binding treaty that would ban lethal autonomous weapons before they can be built. Some of the authors of this document have called for such a treaty and would heartily support it, if states were to adopt it. Other authors of this document have argued that an overly expansive treaty would foreclose the possibility of using AI to mitigate civilian harm. Options for international action are not binary, however, and there is a range of policy options that states should consider between adopting a comprehensive treaty and doing nothing.

The purpose of this paper is to explore the possibility of a middle road. If a roadmap could garner sufficient stakeholder support to have significant beneficial impact, then what elements could it contain? The exercise whose results are presented below was not to identify recommendations that the authors each prefer individually (the authors hold a broad spectrum of views), but instead to identify those components of a roadmap that the authors are all willing to entertain [2]. We, the authors, invite policymakers to consider these components as they weigh possible actions to address concerns surrounding autonomous weapons [3].

Summary of Issues Surrounding Autonomous Weapons

There are a variety of issues that autonomous weapons raise, which might lend themselves to different approaches. A non-exhaustive list of issues includes:

  • The potential for beneficial uses of AI and autonomy that could improve precision and reliability in the use of force and reduce non-combatant harm.
  • Uncertainty about the path of future technology and the likelihood of autonomous weapons being used in compliance with the laws of war, or international humanitarian law (IHL), in different settings and on various timelines.
  • A desire for some degree of human involvement in the use of force. This has been expressed repeatedly in UN discussions on lethal autonomous weapon systems in different ways.
  • Particular risks surrounding lethal autonomous weapons specifically targeting personnel as opposed to vehicles or materiel.
  • Risks regarding international stability.
  • Risk of proliferation to terrorists, criminals, or rogue states.
  • Risk that autonomous systems that have been verified to be acceptable can be made unacceptable through software changes.
  • The potential for autonomous weapons to be used as scalable weapons enabling a small number of individuals to inflict very large-scale casualties at low cost, either intentionally or accidentally.

Summary of Components

  1. A time-limited moratorium on the development, deployment, transfer, and use of anti-personnel lethal autonomous weapon systems [4]. Such a moratorium could include exceptions for certain classes of weapons.
  2. Define guiding principles for human involvement in the use of force.
  3. Develop protocols and/or technological means to mitigate the risk of unintentional escalation due to autonomous systems.
  4. Develop strategies for preventing proliferation to illicit uses, such as by criminals, terrorists, or rogue states.
  5. Conduct research to improve technologies and human-machine systems to reduce non-combatant harm and ensure IHL compliance in the use of future weapons.

Component 1:

States should consider adopting a five-year, renewable moratorium on the development, deployment, transfer, and use of anti-personnel lethal autonomous weapon systems. Anti-personnel lethal autonomous weapon systems are defined as weapons systems that, once activated, can select and engage dismounted human targets without further intervention by a human operator, possibly excluding systems such as:

  • Fixed-point defensive systems with human supervisory control to defend human-occupied bases or installations
  • Limited, proportional, automated counter-fire systems that return fire in order to provide immediate, local defense of humans
  • Time-limited pursuit deterrent munitions or systems
  • Autonomous weapon systems with size above a specified explosive weight limit that select as targets hand-held weapons, such as rifles, machine guns, anti-tank weapons, or man-portable air defense systems, provided there is adequate protection for non-combatants and ensuring IHL compliance [5]

The moratorium would not apply to:

  • Anti-vehicle or anti-materiel weapons
  • Non-lethal anti-personnel weapons
  • Research on ways of improving autonomous weapon technology to reduce non-combatant harm in future anti-personnel lethal autonomous weapon systems
  • Weapons that find, track, and engage specific individuals whom a human has decided should be engaged within a limited predetermined period of time and geographic region

Motivation:

This moratorium would pause development and deployment of anti-personnel lethal autonomous weapons systems to allow states to better understand the systemic risks of their use and to perform research that improves their safety, understandability, and effectiveness. Particular objectives could be to:

  • ensure that, prior to deployment, anti-personnel lethal autonomous weapons can be used in ways that equal or outperform humans in their compliance with IHL (other conditions may also need to be met before deployment is acceptable);
  • lay the groundwork for a potentially legally binding diplomatic instrument; and
  • decrease the geopolitical pressure on countries to deploy anti-personnel lethal autonomous weapons before they are reliable and well-understood.

Compliance Verification:

As part of a moratorium, states could consider various approaches to compliance verification. Potential approaches include:

  • Developing an industry cooperation regime analogous to that mandated under the Chemical Weapons Convention, whereby manufacturers must know their customers and report suspicious purchases of significant quantities of items such as fixed-wing drones, quadcopters, and other weaponizable robots.
  • Encouraging states to declare inventories of autonomous weapons for the purposes of transparency and confidence-building.
  • Facilitating scientific exchanges and military-to-military contacts to increase trust, transparency, and mutual understanding on topics such as compliance verification and safe operation of autonomous systems.
  • Designing control systems to require operator identity authentication and unalterable records of operation, enabling post-hoc compliance checks when there is plausible evidence of non-compliant autonomous weapon attacks (a minimal sketch of such a record follows this list).
  • Relating the quantity of weapons to corresponding capacities for human-in-the-loop operation of those weapons.
  • Designing weapons with air-gapped firing authorization circuits that are connected to the remote human operator but not to the on-board automated control system.
  • More generally, avoiding weapon designs that enable conversion from compliant to non-compliant categories or missions solely by software updates.
  • Designing weapons with formal proofs of relevant properties—e.g., the property that the weapon is unable to initiate an attack without human authorization. Proofs can, in principle, be provided using cryptographic techniques that allow the proofs to be checked by a third party without revealing any details of the underlying software.
  • Facilitating access to (non-classified) AI resources (software, data, methods for ensuring safe operation) for all states that remain in compliance and participate in transparency activities.
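To make the idea of operator authentication plus unalterable operation records more concrete, here is a minimal Python sketch of one possible approach, assuming a hypothetical hash-chained event log rather than any real weapon-control software. Each authorized event is appended to a chain of hashes, so later tampering with the record can be detected during a post-hoc compliance check.

```python
# Illustrative sketch only: a hash-chained record of operator-authorized events.
# Field names and event labels are assumptions, not a real control-system design.
import hashlib, json, time

def append_event(log, operator_id, event):
    """Append an operator-authenticated event, chained to the previous record."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {
        "timestamp": time.time(),
        "operator_id": operator_id,   # ties the event to operator identity authentication
        "event": event,               # e.g. "attack_authorized", "mission_uploaded"
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    log.append(record)
    return log

def verify_chain(log):
    """Recompute every hash; editing any earlier record invalidates all later ones."""
    prev_hash = "0" * 64
    for record in log:
        body = {k: v for k, v in record.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["prev_hash"] != prev_hash or record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True

log = append_event([], "operator-042", "attack_authorized")
append_event(log, "operator-042", "weapon_released")
print(verify_chain(log))   # True unless a record has been altered after the fact
```

A cryptographic proof of a stronger property, such as the one described in the last item above (that no attack can be initiated without human authorization), would go further, but even a simple chained log of this kind supports the post-hoc audits the list envisions.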

Component 2:

Define and universalize guiding principles for human involvement in the use of force.

  • Humans, not machines, are legal and moral agents in military operations.
  • It is a human responsibility to ensure that any attack, including one involving autonomous weapons, complies with the laws of war.
  • Humans responsible for initiating an attack must have sufficient understanding of the weapons, the targets, the environment and the context for use to determine whether that particular attack is lawful.
  • The attack must be bounded in space, time, target class, and means of attack in order for the determination about the lawfulness of that attack to be meaningful.
  • Militaries must invest in training, education, doctrine, policies, system design, and human-machine interfaces to ensure that humans remain responsible for attacks.

Component 3:

Develop protocols and/or technological means to mitigate the risk of unintentional escalation due to autonomous systems.

Specific potential measures include:

  • Developing safe rules for autonomous system behavior when in proximity to adversarial forces to avoid unintentional escalation or signaling. Examples include:
    • No-first-fire policy, so that autonomous weapons do not initiate hostilities without explicit human authorization.
    • A human must always be responsible for providing the mission for an autonomous system.
    • Taking steps to clearly distinguish exercises, patrols, reconnaissance, or other peacetime military operations from attacks in order to limit the possibility of reactions from adversary autonomous systems, such as autonomous air or coastal defenses.
  • Developing resilient communications links to ensure recallability of autonomous systems. Additionally, militaries should refrain from jamming others’ ability to recall their autonomous systems in order to afford the possibility of human correction in the event of unauthorized behavior.

Component 4:

Develop strategies for preventing proliferation to illicit uses, such as by criminals, terrorists, or rogue states:

  • Targeted multilateral controls to prevent large-scale sale and transfer of weaponizable robots and related military-specific components for illicit use.
  • Employ measures to render weaponizable robots less harmful (e.g., geofencing; hard-wired kill switches; onboard control systems largely implemented in unalterable, non-reprogrammable hardware such as application-specific integrated circuits); a minimal geofencing sketch follows this list.
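As an illustration of the geofencing measure mentioned above, here is a minimal Python sketch; the fence coordinates and the arming hook are assumptions for illustration, not any vendor’s actual safety system. The idea is simply that the payload stays disarmed whenever the reported position falls outside a pre-authorized operating polygon.

```python
# Hypothetical sketch: payload arming gated on a geofence (ray-casting point-in-polygon test).
def inside_fence(lat, lon, fence):
    """Return True if (lat, lon) lies inside the polygon given as [(lat, lon), ...]."""
    inside = False
    n = len(fence)
    for i in range(n):
        lat1, lon1 = fence[i]
        lat2, lon2 = fence[(i + 1) % n]
        if (lat1 > lat) != (lat2 > lat):              # edge straddles our latitude
            crossing = lon1 + (lat - lat1) * (lon2 - lon1) / (lat2 - lat1)
            if lon < crossing:                        # the crossing lies east of our point
                inside = not inside
    return inside

def arming_allowed(position, fence):
    # Interlock: outside the authorized area, the payload stays disarmed.
    return inside_fence(position[0], position[1], fence)

fence = [(47.60, -122.35), (47.60, -122.30), (47.65, -122.30), (47.65, -122.35)]
print(arming_allowed((47.62, -122.33), fence))   # True: inside the fence
print(arming_allowed((47.70, -122.33), fence))   # False: outside, stay disarmed
```

Consistent with the same list item, such an interlock would ideally live in unalterable, non-reprogrammable hardware rather than in easily patched software.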

Component 5:

Conduct research to improve technologies and human-machine systems to reduce non-combatant harm and ensure IHL-compliance in the use of future weapons, including:

  • Strategies to promote human moral engagement in decisions about the use of force
  • Risk assessment for autonomous weapon systems, including the potential for large-scale effects, geopolitical destabilization, accidental escalation, increased instability due to uncertainty about the relative military balance of power, and lowering thresholds to initiating conflict and for violence within conflict
  • Methodologies for ensuring the reliability and security of autonomous weapon systems
  • New techniques for verification, validation, explainability, characterization of failure conditions, and behavioral specifications.

About the Authors (in alphabetical order)

Ronald Arkin directs the Mobile Robot Laboratory at Georgia Tech.

Leslie Kaelbling is co-director of the Learning and Intelligent Systems Group at MIT.

Stuart Russell is a professor of computer science and engineering at UC Berkeley.

Dorsa Sadigh is an assistant professor of computer science and of electrical engineering at Stanford.

Paul Scharre directs the Technology and National Security Program at the Center for a New American Security (CNAS).

Bart Selman is a professor of computer science at Cornell.

Toby Walsh is a professor of artificial intelligence at the University of New South Wales (UNSW) Sydney.

The authors would like to thank Max Tegmark for organizing the three-day meeting from which this document was produced.


[1] Autonomous Weapons System (AWS): A weapon system that, once activated, can select and engage targets without further intervention by a human operator.

[2] There is no implication that some authors would not personally support stronger recommendations.

[3] For ease of use, this working paper will frequently shorten “autonomous weapon system” to “autonomous weapon.” The terms should be treated as synonymous, with the understanding that “weapon” refers to the entire system: sensor, decision-making element, and munition.

[4] Anti-personnel lethal autonomous weapon system: A weapon system that, once activated, can select and engage dismounted human targets with lethal force and without further intervention by a human operator.

[5] The authors are not unanimous about this item because of concerns about ease of repurposing for mass-casualty missions targeting unarmed humans. The purpose of the lower limit on explosive payload weight would be to minimize the risk of such repurposing. There is precedent for using an explosive weight limit as a mechanism for delineating between anti-personnel and anti-materiel weapons, such as the 1868 St. Petersburg Declaration Renouncing the Use, in Time of War, of Explosive Projectiles Under 400 Grammes Weight.

[original entry]

Video Friday: Transferring Human Motion to a Mobile Robot Manipulator

 − at 23:05, 18. Oct. 2019

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ARSO 2019 – October 31-November 1, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau

Let us know if you have suggestions for next week, and enjoy today’s videos.


We are very sad to say that MIT professor emeritus Woodie Flowers has passed away. Flowers will be remembered for (among many other things, like co-founding FIRST) the MIT 2.007 course that he began teaching in the mid-1970s, famous for its student competitions.

These competitions got a bunch of well-deserved publicity over the years; here’s one from 1985:

And the 2.007 competitions are still going strong—this year’s theme was Moonshot, and you can watch a replay of the event here.

[ MIT ]


Looks like Aibo is getting wireless integration with Hitachi appliances, which turns out to be pretty cute:

What is this magical box where you push a button and 60 seconds later fluffy pancakes come out?!

[ Aibo ]


LiftTiles are a “modular and reconfigurable room-scale shape display” that can turn your floor and walls into on-demand structures.

[ LiftTiles ]


Ben Katz, a grad student in MIT’s Biomimetics Robotics Lab, has been working on these beautiful desktop-sized Furuta pendulums:

That’s a crowdfunding project I’d pay way too much for.

[ Ben Katz ]


A clever bit of cable manipulation from MIT, using GelSight tactile sensors.

[ Paper ]


A useful display of industrial autonomy on ANYmal from the Oxford Robotics Institute.

This video is of a demonstration for the ORCA Robotics Hub showing the ANYbotics ANYmal robot carrying out industrial inspection using autonomy software from Oxford Robotics Institute.

[ ORCA Hub ] via [ DRS ]

Thanks Maurice!


Meet Katie Hamilton, a software engineer at NASA’s Ames Research Center, who got into robotics because she wanted to help people with daily life. Katie writes code for robots, like Astrobee, who are assisting astronauts with routine tasks on the International Space Station.

[ NASA Astrobee ]


Transferring human motion to a mobile robotic manipulator and ensuring safe physical human-robot interaction are crucial steps towards automating complex manipulation tasks in human-shared environments. In this work we present a robot whole-body teleoperation framework for human motion transfer. We validate our approach through several experiments using the TIAGo robot, showing this could be an easy way for a non-expert to teach a rough manipulation skill to an assistive robot.

[ Paper ]


This is pretty cool looking for an autonomous boat, but we’ll see if they can build a real one by 2020 since at the moment it’s just an average rendering.

[ ProMare ]


I had no idea that asparagus grows like this. But, sure does make it easy for a robot to harvest.

[ Inaho ]


Skip to 2:30 in this Pepper unboxing video to hear the noise it makes when tickled.

[ HIT Lab NZ ]


In this interview, Jean Paul Laumond discusses his movement from mathematics to robotics and his career contributions to the field, especially in regards to motion planning and anthropomorphic motion. Describing his involvement at CNRS and in other robotics projects, such as HILARE, he comments on the distinction in perception between the robotics approach and a mathematics one.

[ IEEE RAS History ]


Here’s a couple of videos from the CMU Robotics Institute archives, showing some of the work that took place over the last few decades.

[ CMU RI ]


In this episode of the Artificial Intelligence Podcast, Lex Fridman speaks with David Ferrucci from IBM about Watson and (you guessed it) artificial intelligence.

David Ferrucci led the team that built Watson, the IBM question-answering system that beat the top humans in the world at the game of Jeopardy. He is also the Founder, CEO, and Chief Scientist of Elemental Cognition, a company working to engineer AI systems that understand the world the way people do. This conversation is part of the Artificial Intelligence podcast.

[ AI Podcast ]


This week’s CMU RI Seminar is by Pieter Abbeel from UC Berkeley, on “Deep Learning for Robotics.”

Programming robots remains notoriously difficult. Equipping robots with the ability to learn would bypass the need for what otherwise often ends up being time-consuming, task-specific programming. This talk will describe recent progress in deep reinforcement learning (robots learning through their own trial and error), in apprenticeship learning (robots learning from observing people), and in meta-learning for action (robots learning to learn). This work has led to new robotic capabilities in manipulation, locomotion, and flight, with the same approach underlying advances in each of these domains.

[ CMU RI ]


[original entry]

Skydio's Dock in a Box Enables Long-Term Autonomy for Drone Applications

 − at 20:30, 16. Oct. 2019

The word “autonomy” in the context of drones (or really any other robot) can mean a whole bunch of different things. Skydio’s newest drone, which you can read lots more about here, is probably the most autonomous drone that we’ve ever seen, in the sense that it can fly itself while tracking subjects and avoiding obstacles. But as soon as the Skydio 2 lands, it’s completely helpless, dependent on a human to pick it up, pack it into a case, and take it back home to recharge.

For consumer applications, this is not a big deal. But for industry, a big part of the appeal of autonomy is being able to deliver results with a minimum of human involvement, since humans are expensive and almost always busy doing other things.

Today, Skydio is announcing the Skydio 2 Dock, a (mostly) self-contained home base that a Skydio 2 drone can snuggle up inside to relax and recharge in between autonomous missions, meaning that you can set it up almost anywhere and get true long-term full autonomy from your drone.

Obviously, this is something that you can only do with the level of autonomy that you get with Skydio’s drone, because there’s no human pilot in the loop. From launch to landing on that alarmingly small platform, the drone can fly itself, although a remote human can step in at any point if they want to. Once the drone is safely back in its carry-on-size weatherproof box, it spends about an hour recharging (you’ll need to plug the box in for this), and then it’s ready to go again for a 23-minute flight. Conceivably, you could have the drone in the air every hour and a half, collecting data for you.

Skydio’s dock is an integral part of their first industry partnership with DroneDeploy, a mapping platform for drones. One potential application is that you could have a Skydio 2 drone living inside of a dock on a construction site, and then it’ll fly around the site as often as you need it to and send you back a map of how much things have progressed. Since the drone is always on-site and ready to go and doesn’t need to coordinate around a human operator, it can give you data on-demand in near-real time, or even after the fact: tell it to fly every day, and then if you want to know what happened a week ago, the data will be there—no human involvement means that the cost to collect data is low enough that there’s no reason not to just do it pretty much constantly.

Well, there’s one reason not to just do it all the time, which is that in the United States it’s probably not allowed by the Federal Aviation Administration (FAA). We asked Skydio about this, and here’s what their CEO Adam Bry said:

“Under current regulations a Beyond Visual Line of Sight (BVLOS) waiver would be required. We think that a small, light, safe drone with advanced navigation and collision avoidance is an excellent candidate for persistent autonomous operation. Our general view is that it’s our responsibility to establish that the system satisfies all relevant safety and logistical concerns, and work with regulators to roll this technology out responsibly.”

The FAA does grant a fair number of waivers like these, and as Bry says, Skydio has a platform that they can (hopefully) show to be safe and reliable enough that the FAA will be cool with it. But this is yet another case where regulation is falling behind technology, and it means that you can’t just start using this system for your business without having to jump through some government hoops first. This is the problem with being a company that’s so far ahead of the curve, I guess—sometimes you have to wait for the rest of the world to catch up.

Skydio also sees its dock system as being valuable for first responders, where real-time data from a drone can potentially save lives without someone on-scene having to devote their attention to drone management. In these cases, having a person intermittently in the loop to request specific views might be a more typical use case, but not having to worry about takeoff or landing or flying would make things much more efficient: you can just ask for the data you want and the drone will provide it, and it won’t bother you about anything else.

We’re told that Skydio will announce pricing of the Skydio 2 Dock when they have general availability early next year.

[original entry]

OpenAI Teaches Robot Hand to Solve Rubik's Cube

 − at 18:00, 15. Oct. 2019

In-hand manipulation is a skill that, as far as I’m aware, humans in general don’t actively learn. We just sort of figure it out by doing other, more specific tasks with our fingers and hands. This makes it particularly tricky to teach robots to solve in-hand manipulation tasks because the way we do it is through experimentation and trial and error. Robots can learn through trial and error as well, but since it usually ends up being mostly error, it takes a very, very long time.

Last June, we wrote about OpenAI’s approach to teaching a five-fingered robot hand to manipulate a cube. The method that OpenAI used leveraged the same kind of experimentation and trial and error, but in simulation rather than on robot hardware. For complex tasks that take a lot of finesse, simulation generally translates poorly into real-world skills, but OpenAI made their system super robust by introducing a whole bunch of randomness into the simulation during the training process. That way, even if the simulation didn’t perfectly match reality (which it didn’t), the system could still handle the kinds of variations that it experienced on the real-world hardware.

In a preprint paper published online today, OpenAI has managed to teach its robot hand to solve a much more difficult version of in-hand cube manipulation: single-handed solving of a 3x3 Rubik’s cube. The new work is also based on the idea of solving a problem using advanced simulations and then transferring the solution to a real-world system, or what researchers call “sim2real.” In the new paper, OpenAI says the new approach “vastly improved sim2real transfer.”

The initial step was to break down the robot manipulation of the Rubik’s cube into two different tasks: 1. rotating a single face of the cube 90 degrees in either direction, and 2. flipping the cube to bring a different face to the top. Since rotating the top face is much simpler for the robot than rotating other faces, the most reliable strategy is to just do a 90-degree flip to get the face you want to rotate on top. The actual process of solving the cube is computationally straightforward, although the solving process is optimized for the motions that the robot can perform rather than the solution that would take the fewest steps.
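To make that decomposition concrete, here is a minimal Python sketch (not OpenAI’s code; the flip axes, primitive names, and orientation bookkeeping are illustrative assumptions) that converts a standard face-turn solution into the two hand primitives: whole-cube flips to bring the target face to the top, followed by rotations of the top face.

```python
from collections import deque

# Orientation: which labeled face (by center sticker) currently sits at each physical
# position (U, D, F, B, L, R). Face-layer turns don't move the centers, so only
# whole-cube flips change this mapping.
START = {"U": "U", "D": "D", "F": "F", "B": "B", "L": "L", "R": "R"}

def pitch(o):   # flip the cube so the front face comes up
    return {"U": o["F"], "F": o["D"], "D": o["B"], "B": o["U"], "L": o["L"], "R": o["R"]}

def roll(o):    # flip the cube so the right face comes up
    return {"U": o["R"], "R": o["D"], "D": o["L"], "L": o["U"], "F": o["F"], "B": o["B"]}

def flips_to_top(orientation, face):
    """Breadth-first search for the shortest flip sequence that brings `face` to the top."""
    queue, seen = deque([(orientation, [])]), set()
    while queue:
        o, seq = queue.popleft()
        if o["U"] == face:
            return o, seq
        key = tuple(sorted(o.items()))
        if key in seen:
            continue
        seen.add(key)
        queue.append((pitch(o), seq + ["flip_pitch"]))
        queue.append((roll(o), seq + ["flip_roll"]))

def to_hand_primitives(solution):
    """Map face-turn moves like ['R', "U'", 'F2'] onto flip / rotate-top primitives."""
    o, primitives = dict(START), []
    for move in solution:
        face, modifier = move[0], move[1:]
        o, flips = flips_to_top(o, face)       # flip until the target face is on top
        primitives += flips
        direction = "ccw" if modifier == "'" else "cw"
        primitives += [f"rotate_top_{direction}"] * (2 if modifier == "2" else 1)
    return primitives

print(to_hand_primitives(["R", "U'", "F2"]))
```

The real system also has to decide when a flip is physically easier than an awkward turn, but the basic bookkeeping (track the cube’s orientation, flip, then turn the top layer) is the same.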

The physical setup that’s doing the real-world cube solving is a Shadow Dexterous E Series Hand with a PhaseSpace motion capture system, plus RGB cameras for visual pose estimation. The cube that’s being manipulated is also pretty fancy: It’s stuffed with sensors that report the orientation of each face with an accuracy of five degrees, which is necessary because it’s otherwise very difficult to know the state of a Rubik’s cube when some of its faces are occluded. 

While the video makes it easy to focus on the physical robot, the magic is mostly happening in simulation, and in transferring things learned in simulation to the real world. Again, the key to this is domain randomization—jittering parts of the simulation around so that your system has to adapt to different situations similar to those that it might encounter in the real world. For example, maybe you slightly alter the weight of the cube, or change the friction of the fingertips a little bit, or turn down the lighting. If your system can handle these simulated variations, it’ll be more robust to real-world operation.
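As a rough illustration of what that jittering can look like in code, here is a minimal sketch; the parameter names and ranges are assumptions for illustration, not OpenAI’s actual randomization configuration.

```python
import random

def sample_randomized_env(base):
    """Return a perturbed copy of the nominal simulation parameters."""
    return {
        "cube_mass":        base["cube_mass"] * random.uniform(0.8, 1.2),
        "finger_friction":  base["finger_friction"] * random.uniform(0.7, 1.3),
        "light_intensity":  base["light_intensity"] * random.uniform(0.5, 1.5),
        "camera_noise_std": base["camera_noise_std"] + random.uniform(0.0, 0.02),
    }

nominal = {"cube_mass": 0.09, "finger_friction": 1.0,
           "light_intensity": 1.0, "camera_noise_std": 0.01}

# Each training episode runs in a freshly randomized environment, so the policy
# cannot overfit to any single set of physics or rendering parameters.
episode_params = sample_randomized_env(nominal)
print(episode_params)
```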

When we spoke last year to Jonas Schneider (one of the authors of the cube manipulation work) and asked him where he thought that system was the weakest, he said that the biggest problem at that point was that the randomizations were both task-specific and hand-designed. It’s probably not surprising, then, that one of the big contributions of the Rubik’s cube work is “a novel method for automatically generating a distribution over randomized environments for training reinforcement learning policies and vision state estimators,” which the researchers call automatic domain randomization (ADR). Here’s why ADR is important, according to the paper:

Our main hypothesis that motivates ADR is that training on a maximally diverse distribution over environments leads to transfer via emergent meta-learning. More concretely, if the model has some form of memory, it can learn to adjust its behavior during deployment to improve performance on the current environment over time, i.e. by implementing a learning algorithm internally. We hypothesize that this happens if the training distribution is so large that the model cannot memorize a special-purpose solution per environment due to its finite capacity. ADR is a first step in this direction of unbounded environmental complexity: it automates and gradually expands the randomization ranges that parameterize a distribution over environments. 

Special-purpose solutions per environment are bad, because they work for that environment, but not for other environments. You can think of each little tweak to a simulation as creating a new environment, and the idea behind ADR is to automate these tweaks to create so many new environments that the system is forced to instead come up with general solutions that can work for many different environments all at once. This reflects the robustness required for real-world operation, where no two environments are ever exactly alike. It turns out that ADR is both better and more efficient than the previous manual tuning, say the researchers:

ADR clearly leads to improved transfer with much less need for hand-engineered randomizations. We significantly outperformed our previous best results, which were the result of multiple months of iterative manual tuning.
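To make the idea of automatically widening those randomization ranges more concrete, here is a minimal sketch of ADR-style range adaptation; the thresholds, step size, and single-parameter setup are simplifying assumptions rather than the paper’s full algorithm.

```python
import random

class ADRParameter:
    """One randomized parameter whose range grows or shrinks with measured performance."""
    def __init__(self, nominal, step):
        self.low, self.high, self.step = nominal, nominal, step

    def sample(self):
        return random.uniform(self.low, self.high)

    def update(self, boundary_success_rate, expand=0.8, shrink=0.4):
        # Evaluate the policy with this parameter pinned at a range boundary;
        # widen the range if it copes, back off if it struggles.
        if boundary_success_rate > expand:
            self.low -= self.step
            self.high += self.step
        elif boundary_success_rate < shrink:
            self.low = min(self.low + self.step, self.high)
            self.high = max(self.high - self.step, self.low)

friction = ADRParameter(nominal=1.0, step=0.05)
for _ in range(10):                      # stand-in for the outer training loop
    env_friction = friction.sample()     # train on environments drawn from the current range
    success = random.random()            # placeholder for measured boundary performance
    friction.update(success)
print(friction.low, friction.high)
```

In the paper’s full algorithm, every randomized parameter has its own range and its own buffer of boundary-performance measurements; the sketch collapses that machinery into a single parameter.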

In terms of results, the researchers were mostly concerned with how many flips and rotations the system could do in a row without failing, rather than how many complete solves it was capable of. It sounds like a complete solve was a bit of an outlier—the starting configuration of the cube could be solved by the system in 43 successful moves, while the average successful run of the best trained policy (continuously trained over multiple months) was about 27 moves. Sixty percent of the time, the system could get halfway to a complete solve, and it made it the entire way 20 percent of the time.

The researchers point out that the method they’ve developed here is general purpose, and you can train a real-world robot to do pretty much any task that you can adequately simulate. You don’t need any real-world training at all, as long as your simulations are diverse enough, which is where the automatic domain randomization comes in. The long-term goal is to reduce the task specialization that’s inherent to most robots, which will help them be more useful and adaptable in real-world applications.


Lastly, just for reference, here’s what (I think) is the current 3x3 cube world record, set just a few days ago by Max Park:

Wow.

It’s interesting that it appears to be faster and/or more efficient for a human to use table contact to augment their own dexterity. We’ve seen other robots make use of environmental contact for manipulation; it would be cool if OpenAI threw a surface into the simulation to see if their system could make use of it.

[ OpenAI ]

[original entry]

Labrador Systems Developing Affordable Assistive Robots for the Home

 − at 16:00, 15. Oct. 2019

Developing robots for the home is still a challenge, especially if you want those robots to interact with people and help them do practical, useful things. However, the potential markets for home robots are huge, and one of the most compelling markets is for home robots that can assist humans who need them. Today, Labrador Systems, a startup based in California, is announcing a pre-seed funding round of $2 million (led by SOSV’s hardware accelerator HAX with participation from Amazon’s Alexa Fund and iRobot Ventures, among others) with the goal of expanding development and conducting pilot studies of  “a new [assistive robot] platform for supporting home health.”

Labrador was founded two years ago by Mike Dooley and Nikolai Romanov. Both Mike and Nikolai have backgrounds in consumer robotics at Evolution Robotics and iRobot, but as an ’80s gamer, I found that it was Mike’s bio (or at least the parts of his bio on LinkedIn) that caught my attention: From 1995 to 1997, Mike worked at Brøderbund Software, helping to manage play testing for games like Myst and Riven and the Where in the World is Carmen San Diego series. He then spent three years at Lego as the product manager for MindStorms. After doing some marginally less interesting things, Mike was the VP of product development at Evolution Robotics from 2006 to 2012, where he led the team that developed the Mint floor sweeping robot. Evolution was acquired by iRobot in 2012, and Mike ended up as the VP of product development over there until 2017, when he co-founded Labrador.

I was pretty much sold at Where in the World is Carmen San Diego (the original version of which I played from a 5.25” floppy on my dad’s Apple IIe)*, but as you can see from all that other stuff, Mike knows what he’s doing in robotics as well.

And according to Labrador’s press release, what they’re doing is this:

Labrador Systems is an early stage technology company developing a new generation of assistive robots to help people live more independently. The company’s core focus is creating affordable solutions that address practical and physical needs at a fraction of the cost of commercial robots. … Labrador’s technology platform offers an affordable solution to improve the quality of care while promoting independence and successful aging. 

Labrador’s personal robot, the company’s first offering, will enter pilot studies in 2020.

That’s about as light on detail as a press release gets, but there’s a bit more on Labrador’s website, including:

  • Our core focus is creating affordable solutions that address practical and physical needs. (we are not a social robot company)
  • By affordable, we mean products and technologies that will be available at less than 1/10th the cost of commercial robots. 
  • We achieve those low costs by fusing the latest technologies coming out of augmented reality with robotics to move things in the real world.

The only hardware we’ve actually seen from Labrador at this point is a demo that they put together for Amazon’s re:MARS conference, which took place a few months ago, showing a “demonstration project” called Smart Walker:

This isn’t the home assistance robot that Labrador got its funding for, but rather a demonstration of some of their technology. So of course, the question is, what’s Labrador working on, then? It’s still a secret, but Mike Dooley was able to give us a few more details.

IEEE Spectrum: Your website shows a smart walker concept—how is that related to the assistive robot that you’re working on?

Mike Dooley: The smart walker was a request from a major senior living organization to have our robot (which is really good at navigation) guide residents from place to place within their communities. To test the idea with residents, it turned out to be much quicker to take the navigation system from the robot and put it on an existing rollator walker. So when you see the clips of the technology in the smart walker video on our website, that’s actually the robot’s navigation system localizing in real time and path planning in an environment.

“Assistive robot” can cover a huge range of designs and capabilities—can you give us any more detail about your robot, and what it’ll be able to do?

One of the core features of our robot is to help people move things where they have difficulty moving themselves, particularly in the home setting. That may sound trivial, but to someone who has impaired mobility, it can be a major daily challenge and negatively impact their life and health in a number of ways. Some examples we repeatedly hear are people not staying hydrated or taking their medication on time simply because there is a distance between where they are and the items they need. Once we have those base capabilities, i.e. the ability to navigate around a home and move things within it, then the robot becomes a platform for a wider variety of applications.

What made you decide to develop assistive robots, and why are robots a good solution for seniors who want to live independently?

Supporting independent living has been seen as a massive opportunity in robotics for some time, but also as something off in the future. The turning point for me was watching my mother enter that stage in her life and seeing her transition to using a cane, then a walker, and eventually to a wheelchair. That made the problems very real for me. It also made things much clearer about how we could start addressing specific needs with the tools that are becoming available now.

In terms of why robots can be a good solution, the basic answer is the level of need is so overwhelming that even helping with “basic” tasks can make an appreciable difference in the quality of someone’s daily life. It’s also very much about giving individuals a degree of control back over their environment. That applies to seniors as well as others whose world starts getting more complex to manage as their abilities become more impaired.

What are the particular challenges of developing assistive robots, and how are you addressing them? Why do you think there aren’t more robotics startups in this space?

The setting (operating in homes and personal spaces) and the core purpose of the product (aiding a wide variety of individuals) bring a lot of complexity to any capability you want to build into an assistive robot. Our approach is to put as much structure as we can into the system to make it functional, affordable, understandable and reliable.

I think one of the reasons you don’t see more startups in the space is that a lot of roboticists want to skip ahead and do the fancy stuff, such as taking on human-level capabilities around things like manipulation. Those are very interesting research topics, but we think those are also very far away from being practical solutions you can productize for people to use in their homes.

How do you think assistive robots and human caregivers should work together?

The ideal scenario is allowing caregivers to focus more of their time on the high-touch, personal side of care. The robot can offload the more basic support tasks as well as extend the impact of the caregiver for the long hours of the day they can’t be with someone at their home. We see that applying to both paid care providers as well as the 40 million unpaid family members and friends that provide assistance.

The robot is really there as a tool, both for individuals in need and the people that help them. What’s promising in the research discussions we’ve had so far is that even when a caregiver is present, giving control back to the individual for simple things can mean a lot in the relationship between them and the caregiver.

What should we look forward to from Labrador in 2020?

Our big goal in 2020 is to start placing the next version of the robot with individuals with different types of needs to let them experience it naturally in their own homes and provide feedback on what they like, what they don’t like, and how we can make it better. We are currently reaching out to companies in the healthcare and home health fields to participate in those studies and test specific applications related to their services. We plan to share more detail about those studies and the robot itself as we get further into 2020.


If you’re an organization (or individual) who wants to possibly try out Labrador’s prototype, the company encourages you to connect with them through their website. And as we learn more about what Labrador is up to, we’ll have updates for you, presumably in 2020.

[ Labrador Systems ]

* I just lost an hour of my life after finding out that you can play Where in the World is Carmen San Diego in your browser for free.

[original entry]

Agility Robotics Unveils Upgraded Digit Walking Robot

 − at 17:59, 14. Oct. 2019

Last time we saw Agility Robotics’ Digit biped, it was picking up a box from a Ford delivery van and autonomously dropping it off on a porch, while at the same time managing to not trip over stairs, grass, or small children. As a demo, it was pretty impressive, but of course there’s an enormous gap between making a video of a robot doing a successful autonomous delivery and letting that robot out into the semi-structured world and expecting it to reliably do a good job.

Agility Robotics is aware of this, of course, and over the last six months they’ve been making substantial improvements to Digit to make it more capable and robust. A new video posted today shows what’s new with the latest version of Digit—Digit v2.

We appreciate Agility Robotics foregoing music in the video, which lets us hear exactly what Digit sounds like in operation. The most noticeable changes are in Digit’s feet, torso, and arms, and I was particularly impressed to see Digit reposition the box on the table before grasping it to make sure that it could get a good grip. Otherwise, it’s hard to tell what’s new, so we asked Agility Robotics’ CEO Damion Shelton to get us up to speed.

IEEE Spectrum: Can you summarize the differences between Digit v1 and v2? We’re particularly interested in the new feet.

Damion Shelton: The feet now include a roll degree of freedom, so that Digit can resist lateral forces without needing to side step. This allows Digit v2 to balance on one foot statically, which Digit v1 and Cassie could not do. The larger foot also dramatically decreases load per unit area, for improved performance on very soft surfaces like sand.

The perception stack includes four Intel RealSense cameras used for obstacle detection and pick/place, plus the lidar. In Digit v1, the perception systems were brought up incrementally over time for development purposes. In Digit v2, all perception systems are active from the beginning and tied to a dedicated computer. The perception system is used for a number of additional things beyond manipulation, which we’ll start to show in the next few weeks.

The torso changes are a bit more behind-the-scenes. All of the electronics in it are now fully custom, thermally managed, and environmentally sealed. We’ve also included power and ethernet to a payload bay that can fit either a NUC or Jetson module (or other customer payload).

What exactly are we seeing in the video in terms of Digit’s autonomous capabilities?

At the moment this is a demonstration of shared autonomy. Picking and placing the box is fully autonomous. Balance and footstep placement are fully autonomous, but guidance and obstacle avoidance are under local teleop. It’s no longer a radio controller as in early videos; we’re not ready to reveal our current controller design but it’s a reasonably significant upgrade. This is v2 hardware, so there’s one more full version in development prior to the 2020 launch, which will expand the autonomy envelope significantly.

What are some unique features or capabilities of Digit v2 that might not be obvious from the video?

For those who’ve used Cassie robots, the power-up and power-down ergonomics are a lot more user friendly. Digit can be disassembled into carry-on luggage sized pieces (give or take) in under 5 minutes for easy transport. The battery charges in-situ using a normal laptop-style charger.

I’m curious about this “stompy” sort of gait that we see in Digit and many other bipedal robots—are there significant challenges or drawbacks to implementing a more human-like (and presumably quieter) heel-toe gait?

There are no drawbacks other than increased complexity in controls and foot design. With Digit v2, the larger surface area helps with the noise, and v2 has similar or better passive-dynamic performance as compared to Cassie or Digit v1. The foot design is brand new, and new behaviors like heel-toe are an active area of development.

How close is Digit v2 to a system that you’d be comfortable operating commercially?

We’re on track for a 2020 launch for Digit v3. Changes from v2 to v3 are mostly bug-fix in nature, with a few regulatory upgrades like full battery certification. Safety is a major concern for us, and we have launch customers that will be operating Digit in a safe environment, with a phased approach to relaxing operational constraints. Digit operates almost exclusively under force control (as with cobots more generally), but at the moment we’ll err on the side of caution during operation until we have the stats to back up safety and reliability. The legged robot industry has too much potential for us to screw it up by behaving irresponsibly.

It will be a while before Digit (or any other humanoid robot) is operating fully autonomously in crowds of people, but there are so many large market opportunities (think indoor factory/warehouse environments) to address prior to that point that we expect to mature the operational safety side of things well in advance of having saturated the more robot-tolerant markets.

[ Agility Robotics ]

[original entry]

Video Friday: This Humanoid Robot Will Serve You Ice Cream

 − at 19:50, 11. Oct. 2019

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

Northeast Robotics Colloquium – October 12, 2019 – Philadelphia, Pa., USA
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 1, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau

Let us know if you have suggestions for next week, and enjoy today’s videos.


What’s better than a robotics paper with “dynamic” in the title? A robotics paper with “highly dynamic” in the title. From Sangbae Kim’s lab at MIT, the latest exploits of Mini Cheetah:

Yes I’d very much like one please. Full paper at the link below.

[ Paper ] via [ MIT ]


A humanoid robot serving you ice cream—on his own ice cream bike: What a delicious vision!

[ Roboy ]


The Roomba “i” series and “s” series vacuums have just gotten an update that lets you set “keep out” zones, which is super useful. Tell your robot where not to go!

I feel bad, that Roomba was probably just hungry :(

[ iRobot ]


We wrote about Voliro’s tilt-rotor hexcopter a couple years ago, and now it’s off doing practical things, like spray painting a building pretty much the same color that it was before.

[ Voliro ]

Thanks Mina!


Here’s a clever approach for bin-picking problematic objects, like shiny things: Just grab a whole bunch, and then sort out what you need on a nice robot-friendly table.

It might take a little bit longer, but what do you care, you’re probably off sipping a cocktail with a little umbrella in it on a beach somewhere.

[ Harada Lab ]


A unique combination of the IRB 1200 and YuMi industrial robots that use vision, AI and deep learning to recognize and categorize trash for recycling.

[ ABB ]


Measuring glacial movements in-situ is a challenging, but necessary task to model glaciers and predict their future evolution. However, installing GPS stations on ice can be dangerous and expensive when not impossible in the presence of large crevasses. In this project, the ASL develops UAVs for dropping and recovering lightweight GPS stations over inaccessible glaciers to record the ice flow motion. This video shows the results of first tests performed at Gorner glacier, Switzerland, in July 2019.

[ EPFL ]


Turns out Tertills actually do a pretty great job fighting weeds.

Plus, they leave all those cute lil’ Tertill tracks.

[ Franklin Robotics ]


The online autonomous navigation and semantic mapping experiment presented [below] is conducted with the Cassie Blue bipedal robot at the University of Michigan. The sensors attached to the robot include an IMU, a 32-beam LiDAR and an RGB-D camera. The whole online process runs in real-time on a Jetson Xavier and a laptop with an i7 processor.

The resulting map is so precise that it looks like we are doing real-time SLAM (simultaneous localization and mapping). In fact, the map is based on dead-reckoning via the InvEKF.

[ GTSAM ] via [ University of Michigan ]


UBTECH has announced an upgraded version of its Meebot, which is 30 percent bigger and comes with more sensors and programmable eyes.

[ UBTECH ]


ABB’s research team will be working with medical staff, scientists, and engineers to develop non-surgical medical robotics systems, including logistics and next-generation automated laboratory technologies. The team will develop robotics solutions that will help eliminate bottlenecks in laboratory work and address the global shortage of skilled medical staff.

[ ABB ]


In this video, Ian and Chris go through Misty’s SDK, discussing the languages we’ve included, the tools that make it easy for you to get started quickly, a quick rundown of how to run the skills you build, plus what’s ahead on the Misty SDK roadmap.

[ Misty Robotics ]


My guess is that this was not one of iRobot’s testing environments for the Roomba.

You know, that’s actually super impressive. And maybe if they threw one of the self-emptying Roombas in there, it would be a viable solution to the entire problem.

[ How Farms Work ]


Part of WeRobotics’ Flying Labs network, Panama Flying Labs is a local knowledge hub catalyzing social good and empowering local experts. Through training and workshops, demonstrations and missions, the Panama Flying Labs team leverages the power of drones, data, and AI to promote entrepreneurship, build local capacity, and confront the pressing social challenges faced by communities in Panama and across Central America.

[ Panama Flying Labs ]


Go on a virtual flythrough of the NIOSH Experimental Mine, one of two courses used in the recent DARPA Subterranean Challenge Tunnel Circuit Event held 15-22 August, 2019. The data used for this partial flythrough tour were collected using 3D LIDAR sensors similar to the sensors commonly used on autonomous mobile robots.

[ SubT ]


Special thanks to PBS, Mark Knobil, Joe Seamans and Stan Brandorff and many others who produced this program in 1991.

It features Reid Simmons (and his 1 year old son), David Wettergreen, Red Whittaker, Mac Macdonald, Omead Amidi, and other Field Robotics Center alumni building the planetary walker prototype called Ambler. The team gets ready for an important demo for NASA.

[ CMU RI ]


As art and technology merge, roboticist Madeline Gannon explores the frontiers of human-robot interaction across the arts, sciences and society, and explores what this could mean for the future.

[ Sonar+D ]


[original entry]

Watch Astrobee's First Autonomous Flight on the International Space Station

 − at 21:35, 09. Oct. 2019

NASA’s Astrobee robots have come a long, long way since we first met them at NASA Ames back in 2017. In fact, they’ve made it all the way to the International Space Station: Bumble, Honey, and Queen Bee are up there right now. While Honey and Queen Bee are still packed away in a case (and quite unhappy about it, I would imagine), Bumble has been buzzing around, getting used to its new home. To be ready to fly solo, all Bumble needed was some astronaut-assisted mapping of its environment, and last month, the little robotic cube finally embarked on its first fully autonomous ISS adventure.

We cut together the above video from about an hour’s worth of raw footage (without audio) of Astrobee testing, which took place in the Japanese Experiment Module (JEM), also known as Kibo, on the ISS on August 28. Astronaut Christina Koch had been working with roboticists at NASA Ames on earlier Astrobee start-up activities, which hadn’t gone as perfectly as everyone hoped they would, and was (understandably) excited that the robot was able to successfully fly itself though the JEM. Christina and another astronaut, off camera in the Harmony node attached to the JEM, do a little dance to celebrate (with what is now officially the “Astrobee Jig,” we’re told), and apparently Astrobee now has a standing invitation to join in on all future ISS dance parties. 

Astrobee’s goal for its first autonomous mission was to undock itself, follow a flight plan consisting of a list of waypoints and objectives that was uploaded to the robot from the ground, and then return to the dock. All of this was done without any direct intervention from the ground controllers or from the astronauts. As you can see in the video, Christina is mostly just following Bumble around as it does its thing, keeping out of the way of the navigation camera but otherwise just making sure the robot didn’t get into any trouble. 

How Astrobee flies itself

So far, the difficult part for Astrobee has been getting its localization to work robustly. While the robot does navigate visually, it’s dependent on preexisting maps rather than doing SLAM. Putting together those initial maps involved hand-carrying Bumble around the JEM to collect images, which were then processed offline (back on Earth) to identify features in the images and correlate them with locations to build up the map that Bumble uses to navigate.

With maps like these, you have to find the right mix of features to include for navigation to work optimally. If your maps are too rich in features, there will be too much data for your robot to manage, and if the maps are too sparse, the robot won’t be able to localize accurately. This was a little bit tricky for Astrobee, as deputy group lead Maria Bualat from the Intelligent Systems Division at NASA Ames explained to us:

It turned out that our maps needed to be richer. We tried to cull them down to make them fast and efficient, but we weren’t keeping enough features to enable the robot to localize robustly, so it would get lost a lot. During some of our earlier activities when we were trying to fly even basic motions, the robot would tend to drift as it would lose lock. This last activity that we had was great, because it was our first time using the more enriched map, and the localization worked really well. It was kind of nice because [Christina] saw us through those struggles—she saw how tough it was to get the robot to fly.
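Purely as an illustration of the map-density tradeoff Bualat describes (this is not Astrobee’s actual localization code, and the data layout and thresholds are assumptions), a sparse map is cheap to match against but can leave the robot with too few landmark matches to trust its pose estimate:

```python
import numpy as np

def build_map(keyframes, max_features_per_keyframe):
    """Cull each mapping image down to its strongest features (the map landmarks)."""
    landmarks = []
    for kf in keyframes:
        # kf["features"]: list of (descriptor_vector, world_xyz, strength) tuples
        strongest = sorted(kf["features"], key=lambda f: f[2], reverse=True)
        landmarks.extend(strongest[:max_features_per_keyframe])
    return landmarks

def can_localize(observed_descriptors, landmarks, match_threshold=0.7, min_matches=20):
    """Crude check: enough descriptor matches against the map to attempt a pose fit."""
    matches = 0
    for d in observed_descriptors:
        distances = [np.linalg.norm(d - lm[0]) for lm in landmarks]
        if distances and min(distances) < match_threshold:
            matches += 1
    return matches >= min_matches
```

Raise max_features_per_keyframe and matching gets slower but more reliable; cut it too far and the robot “loses lock” more often, which is exactly the failure mode described above.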

Besides this little bit of software optimization, Bualat says that Astrobee has been working well, without any other software issues or hardware issues of any kind. This is impressive for any robot, and especially so for a robot that was developed entirely on the ground and is now being used in space. And as for the astronauts whose job it is to test Astrobee, it sounds like they’re actually having some fun with it. There was a bit of concern initially that Astrobee’s impellers would be overly loud, but that might be a feature rather than a bug, as Bualat explains: “We’ve asked them if they found it noisy or annoying, and they said no—in fact, they said that you can certainly hear it, but they actually liked it because it means that Astrobee can’t sneak up on them.”

Astrobee will be continuing its commissioning activities over the next few months, which includes tuning Bumble so that it can fly as robustly as possible. For example, Astrobee needs to be able to navigate even if an astronaut moves in front of its navigation camera, blocking some of the view. Bumble will then get its perching arm installed and tested, after which the goal is to start working with some of the science payloads, like a gecko gripper, an RFID tracker, and a microphone array, which you can read more about here and here. Honey and Queen still need to go through their own start-up tests and calibrations, and Maria Bualat says the goal is to have multiple Astrobees buzzing around the ISS together “not too far in the future.”

[ Astrobee ]

[original entry]

From Mainframes to PCs: What Robot Startups Can Learn From the Computer Revolution

 − at 23:18, 08. Oct. 2019

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

Autonomous robots are coming around slowly. We already have autonomous vacuum cleaners, autonomous lawn mowers, toys that bleep and blink, and (maybe) soon autonomous cars. Yet, generation after generation, we keep waiting for the robots that we all know from movies and TV shows. Instead, businesses seem to get farther and farther away from the robots that are able to do a large variety of tasks using general-purpose, human anatomy-inspired hardware.

Although these are the droids we have been looking for, anything that came close, such as Willow Garage’s PR2 or Rethink Robotics’ Baxter, has bitten the dust. With building a robotics company being particularly hard, compounding business risk with technological risk, the trend has gone from selling robots to selling actual services, like mowing your lawn, providing taxi rides, fulfilling retail orders, or picking strawberries by the pound. Unfortunately for fans of R2-D2 and C-3PO, these kinds of business models emphasize specialized, room- or fridge-sized hardware that is optimized for one very specific task but does not contribute to a general-purpose robotic platform.

We have actually seen something very similar in the personal computer (PC) industry. In the 1950s, even though computers could be as big as an entire room and were only available to a select few, the public already had a good idea of what computers would look like. A long list of fictional computers started to populate mainstream entertainment during that time. In a 1962 New York Times article titled “Pocket Computer to Replace Shopping List,” visionary scientist John Mauchly stated that “there is no reason to suppose the average boy or girl cannot be master of a personal computer.”

In 1968, Douglas Engelbart gave us the “mother of all demos,” browsing hypertext on a graphical screen with a mouse and demonstrating other ideas that would become standard only decades later. Now that we have finally seen all of this, it might be helpful to examine what actually enabled the computing revolution, in order to learn where robotics really stands and what we need to do next.

The parallels between computers and robots

In the 1970s, mainframes were about to be replaced by the emerging class of minicomputers, fridge-sized devices that cost less than US $25,000 ($165,000 in 2019 dollars). These computers did not use punch cards, but could be programmed in Fortran and BASIC, dramatically expanding the ease with which potential applications could be created. Yet it was still unclear whether minicomputers could ever replace big mainframes in applications that require fast and efficient processing of large amounts of data, let alone enter every living room. This is very similar to the robotics industry right now, where large-scale factory robots (the mainframes) that have existed since the 1960s are seeing competition from a growing industry of collaborative robots that can safely work next to humans and can easily be installed and programmed (the minicomputers). As in the ’70s, applications for these devices, which reach system prices comparable to that of a luxury car, are quite limited, and it is hard to see how they could ever become a consumer product.

Yet, as in the computer industry, successful architectures are quickly being cloned, driving prices down, and entirely new approaches to constructing and programming robotic arms are sprouting left and right. Arm makers are joined by manufacturers of autonomous carts, robotic grippers, and sensors. These components can be combined, paving the way for standard general purpose platforms that follow the model of the IBM PC, which built a capable, open architecture relying as much as possible on commodity parts.

General purpose robotic systems have not been successful for reasons similar to those that delayed general purpose, also known as “personal,” computers for decades. Mainframes were custom-built for each application, while typewriters got smarter and smarter, not really leaving room for general purpose computers in between. Indeed, given the cost of hardware and the relatively limited abilities of today’s autonomous robots, it is almost always smarter to build a special purpose machine than to try to make a collaborative mobile manipulator smart.

A current example is e-commerce grocery fulfillment. The current trend is to reserve underutilized parts of a brick-and-mortar store for a micro-fulfillment center that stores goods in little crates with an automated retrieval system and a (human) picker. A number of startups, such as Alert Innovation, Fabric, Ocado Technology, TakeOff Technologies, and Tompkins Robotics, to name just a few, have recently raised hundreds of millions of dollars in venture capital to build mainframe equivalents of robotic fulfillment centers. This is in contrast with a robotic picker, which would drive through the aisles to restock and pick from shelves. Such a robotic store clerk would come much closer to our vision of a general purpose robot, but it would require many copies of itself crowding the aisles to churn out hundreds of orders per hour the way a microwarehouse can. Even though such robots might eventually be more efficient, margins in retail are already low, making it unlikely that this industry will produce the technological jump we need to get friendly C-3POs manning the aisles.

Mainframes were also attacked from the bottom. Fascination with the new digital technology led to a hobbyist movement to create microcomputers that were sold via mail order or at RadioShack. Initially, a large number of small businesses were selling tens, at most hundreds, of devices, usually as kits with wooden enclosures. This trend culminated in the “1977 Trinity” in the form of the Apple II, the Commodore PET, and the Tandy TRS-80, complete computers that sold for prices around $2500 (TRS) to $5000 (Apple) in today’s dollars. The main application of these computers was their programmability (in BASIC), which would enable consumers to “learn to chart your biorhythms, balance your checking account, or even control your home environment,” according to an original Apple advertisement. Similarly, there exists today a myriad of gadgets that explore different aspects of robotics such as mobility, manipulation, and entertainment.

As in the fledgling personal computing industry, the advertised functionality was at best a model of the real deal. A now-famous milestone in entertainment robotics was Sony’s original Aibo, a robotic dog that was advertised to have many of the properties a real dog has, such as developing its own personality, playing with a toy, and interacting with its owner. Released in 1999, and re-launched in 2018, the platform has a solid following among hobbyists and academics who like its programmability, but probably very few users who accept the device as a stand-in for a pet.

There also exist countless “build-your-own-robotic-arm” kits. One of the more successful examples is the uArm, which sells for around $800 and is advertised to perform pick and place, assembly, 3D printing, laser engraving, and many other things that sound like high value applications. Compelling videos of the robot actually doing these things in a constrained environment led to two successful crowdfunding campaigns and established the robot as a successful educational tool.

Finally, there exist platforms that allow hobbyist programmers to explore mobility and construct robots that patrol your house, deliver items, or provide their users with telepresence abilities. An example is the Misty II. Much as with the original Apple II, there remains a disconnect between the price of the hardware and the fidelity of the applications that are available.

For computers, this disconnect began to disappear with the invention of the first electronic spreadsheet software, VisiCalc, which spun out of Harvard in 1979 and prompted many people to buy an entire microcomputer just to run the program. VisiCalc was soon joined by WordStar, a word processing application that sold for close to $2000 in today’s dollars. WordStar, too, would entice many people to buy the entire hardware just to use the software. The two programs are early examples of what became known as the “killer application.”

With factory automation now mature, and robots with the price tag of a minicomputer capable of driving around and autonomously carrying out many manipulation tasks, the robotics industry is roughly where the PC industry was between 1973, the release of the Xerox Alto (the first computer with a graphical user interface, mouse, and special software), and 1979, when microcomputers in the under-$5000 category began to take off.

Killer apps for robots

So what would it take for robotics to continue to advance like computers did? The market itself has already done a good job of distilling what the possible killer apps are. VCs and customers alike push companies that have set out with lofty goals to reduce their offering to a simple value proposition. As a result, companies that started at opposite ends often converge into mirror images of each other, offering very similar autonomous carts, (bin) picking, palletizing, depalletizing, or sorting solutions. Each of these companies usually serves a single application in a single vertical: for example, bin-picking clothes, transporting warehouse goods, or picking strawberries by the pound. They are trying to prove that their specific technology works without spreading themselves too thin.

Very few of these companies have really taken off. One example is Kiva Systems, which turned into the logistics robotics division of Amazon. Kiva and others are structured around sound value propositions that are grounded in well-known user needs. Because these solutions are so specialized, however, it is unlikely that they will produce economies of scale of the same magnitude that early computer users could enjoy when they bought both a spreadsheet and a word processor application for their expensive microcomputer. What would make these robotic solutions more interesting is when their functionality becomes stackable: instead of just being able to do bin picking, palletizing, and transportation with the same hardware, these three skills could be combined to model entire processes.

A skill that startups have so far paid little attention to, and that has historically been owned by the mainframe equivalent of robotics, is the assembly of simple mechatronic devices. The ability to assemble mechatronic parts is akin to other everyday tasks such as changing a light bulb, changing the batteries in a remote control, or tending machines like a lever-based espresso machine. Mastering these tasks would make the autonomous execution of complete workflows possible using a single machine, eventually leading to an explosion of industrial productivity across all sectors. For example, picking up an item from a bin, arranging it on the robot, moving it elsewhere, and placing it into a shelf or a machine is a process that applies equally to a manufacturing environment, a retail store, or someone’s kitchen.
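
As a sketch of what such stackable skills could look like in software (all class and method names here are invented for illustration, not any vendor’s API), a shared interface lets picking, transport, and placing be chained into one workflow on the same hardware:

from typing import List, Protocol

class Skill(Protocol):
    def run(self, robot) -> None: ...

class PickFromBin:
    def __init__(self, item: str): self.item = item
    def run(self, robot) -> None: robot.pick(self.item)

class MoveTo:
    def __init__(self, location: str): self.location = location
    def run(self, robot) -> None: robot.drive_to(self.location)

class PlaceInto:
    def __init__(self, target: str): self.target = target
    def run(self, robot) -> None: robot.place(self.target)

def execute(robot, workflow: List[Skill]) -> None:
    # The same chain applies to a factory cell, a retail shelf, or a kitchen.
    for skill in workflow:
        skill.run(robot)

# Example workflow mirroring the bin-to-shelf process described above:
bin_to_shelf = [PickFromBin("item_A"), MoveTo("shelf_3"), PlaceInto("shelf_3_slot_2")]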

Even though many of the above applications are becoming possible, it is still very hard to get a platform off the ground without added components that provide “killer app” value of their own. Interesting examples are Rethink Robotics and the Robot Operating System (ROS). Rethink Robotics’ Baxter and Sawyer robots pioneered a great user experience (like the 1973 Xerox Alto, really the first PC), but their applications were difficult to extend beyond simple pick-and-place, palletizing, and depalletizing items.

ROS pioneered interprocess communication software that was adapted to robotic needs (multiple computers, different programming languages) as well as the idea of software modularity in robotics, but, in the absence of a common hardware platform, it hasn’t yet delivered a single application, e.g. for navigation, path planning, or grasping, that performs beyond the research-grade demonstration level and won’t get discarded once developers turn to production systems. At the same time, an increasing number of robotic devices, such as robot arms and 3D perception systems that offer intelligent functionality on board, can now be wired together directly, without an intermediary computer, while keeping close control over the real-time aspects of their hardware.
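
The kind of modularity ROS pioneered is easy to see in its canonical publisher example: each capability runs as its own node and communicates over named topics, so a planner, a driver, and a perception module can be developed and swapped independently. The sketch below mirrors the standard rospy “talker” tutorial; the node and topic names are arbitrary.

import rospy
from std_msgs.msg import String

def gripper_status_publisher():
    # One small node among many; anything else on the ROS graph can subscribe
    # to "gripper/status" without knowing how this node is implemented.
    rospy.init_node("gripper_status_node")
    pub = rospy.Publisher("gripper/status", String, queue_size=10)
    rate = rospy.Rate(1)  # publish at 1 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data="open"))
        rate.sleep()

if __name__ == "__main__":
    gripper_status_publisher()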

At my company, Robotic Materials Inc., we have made strides to identify a few applications such as bin picking and assembly, making them configurable with a single click by combining machine learning and optimization with an intuitive user interface. Here, users can define object classes and how to grasp them using a web browser, which then appear as first-class objects in a robot-specific graphical programming language. We have also done this for assembly, allowing users to stack perception-based picking and force-based assembly primitives by simply dragging and dropping appropriate commands together.

While such an approach might answer the question of a killer app for robots priced in the “minicomputer” range, it is unclear how killer app-type value can be generated with robots in the less-than-$5000 category. A possible answer is twofold: First, with low-cost arms, mobility platforms, and entertainment devices continuously improving, a confluence of technology readiness and user innovation, as with the Apple II and VisiCalc, will eventually happen. For example, not much innovation is needed to turn Misty into a home security system, the uArm into a low-cost bin-picking system, or an Aibo-like device into a therapeutic system for the elderly or for children with autism.

Second, robots and their components have to become dramatically cheaper. Indeed, computers have seen an exponential reduction in price accompanied by an exponential increase in computational power, thanks in large part to Moore’s Law. This development has helped robotics too, enabling breakthroughs in mobility and manipulation through the ability to process massive amounts of image and depth data in real time, and we can expect it to continue to do so.

Is there a Moore’s Law for robots?

One might ask, however, how similar dynamics might be possible for robots as a whole, including all their motors and gears, and what a “Moore’s Law” would look like for the robotics industry. Here, it helps to remember that the perpetuation of Moore’s Law is not the cause of the PC revolution, but its result. Indeed, the first killer apps for bookkeeping, editing, and gaming were so good that they unleashed tremendous consumer demand, pushing past what was thought to be physically possible over and over again. (I vividly remember 56 kbps being considered the absolute maximum data rate for copper phone lines until DSL appeared.)

That these economies of scale are also applicable to mechatronics is impressively demonstrated by the car industry. A good example is the 2020 Prius Prime, a highly computerized plug-in hybrid that is available for one third of the cost of my company’s GPR-1 mobile manipulator while being orders of magnitude more complex, sporting an electric motor, a combustion engine, and a myriad of sensors and computers. It is therefore quite conceivable that a mobile manipulator could retail at one tenth of the cost of a modern car, once robots enjoy similar mass-market appeal. And given that these robots are part of the equation, actively lowering the cost of production, this might happen faster than anything we have seen before in the history of industrialization.

There is one more driver that might make robots exponentially more capable: the cloud. Once a general purpose robot has learned or been programmed with a new skill, it could share it with every other robot. At some point, a grocer who buys a robot could assume that it already knows how to recognize and handle 99 percent of the retail items in the store. Likewise, a manufacturer could assume that the robot can handle and assemble every item available from McMaster-Carr and Misumi. Finally, families could expect a robot to know every kitchen item that Ikea and Pottery Barn sell. This sounds like a labor-intensive problem, but it is probably more manageable than collecting footage for Google’s Street View using cars, tricycles, and snowmobiles, among other vehicles.
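
A purely hypothetical sketch of how that skill sharing could work: before handling an unfamiliar product, a robot asks a shared registry whether another robot has already contributed a grasp model for it, and uploads what it learns in return. The service URL, endpoints, and payload fields below are all invented for illustration.

import requests

SKILL_REGISTRY = "https://skills.example.com/api/v1"  # hypothetical service

def fetch_grasp_model(product_id: str):
    """Return a previously shared grasp model for this product, or None."""
    resp = requests.get(f"{SKILL_REGISTRY}/grasps/{product_id}", timeout=5)
    if resp.status_code == 200:
        return resp.json()  # e.g. grasp poses, gripper width, approach direction
    return None

def contribute_grasp_model(product_id: str, model: dict) -> None:
    """Upload a locally learned grasp model so every other robot can reuse it."""
    requests.post(f"{SKILL_REGISTRY}/grasps/{product_id}", json=model, timeout=5)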

Strategies for robot startups

While we are waiting for these two trends—better and better applications and hardware with decreasing cost—to converge, we as a community have to keep exploring what the canonical robotic applications beyond mobility, bin picking, palletizing, depalletizing, and assembly are. We must also continue to solve the fundamental challenges that stand in the way of making these solutions truly general and robust.

For both questions, it might help to look at the strategies that have been critical in the development of the personal computer, which might equally well apply to robotics:

  • Start with a solution to a problem your customers have. Unfortunately, their problem is almost never that they need your sensor, widget, or piece of code, but something that already costs them money or negatively affects them in some other way. Example: There were many more people who had a problem calculating their taxes (and wanted to buy VisiCalc) than people who wanted to write their own solution in BASIC.

  • Build as little of your own hardware as necessary. Your business model should be stronger than the margin you can make on the hardware. Why take the risk? Example: Why build your own typewriter if you can write the best typewriting application, one that makes it worth buying a computer just for that?

  • If your goal is a platform, make sure it comes with a killer application that alone justifies the platform cost. Example: Microcomputer companies came and went until the “1977 Trinity” intersected with the killer apps: the spreadsheet and the word processor. Corollary: You can also get lucky.

  • Use an open architecture, which creates an ecosystem where others compete on creating better components and peripherals, while allowing others to integrate your solution into their vertical and stack it with other devices. Example: Both the Apple II and the IBM PC were completely open architectures, enabling many clones, thereby growing the user and developer base. 

It’s worthwhile pursuing this. With most business processes already being digitized, general purpose robots will allow us to fill in gaps in mobility and manipulation, increasing productivity at levels only limited by the amount of resources and energy that are available, possibly creating a utopia in which creativity becomes the ultimate currency. Maybe we’ll even get R2-D2.

Nikolaus Correll is an associate professor of computer science at the University of Colorado at Boulder, where he works on mobile manipulation and other robotics applications. He’s co-founder and CTO of Robotic Materials Inc., which is supported by the National Science Foundation and the National Institute of Standards and Technology via their Small Business Innovation Research (SBIR) programs.

[original entry]

#295: inVia Robotics: Product-Picking Robots for the Warehouse, with Rand Voorhies

 − at 09:00, 07. Oct. 2019

In this episode, Lauren Klein speaks with Dr. Rand Voorhies, co-founder and CTO of inVia Robotics. In a world where consumers expect fast home delivery of a variety of goods, inVia’s mission is to help warehouse workers package diverse sets of products quickly using a system of autonomous mobile robots. Voorhies describes how inVia’s robots pick and deliver boxes or totes of products to and from human workers in a warehouse environment, eliminating the need for people to walk throughout the warehouse, and how the actions of the robots are optimized.

[original entry]