Meet Atlas — Boston Dynamics’ Humanoid Robot; Soldier of the Future?

Boston Dynamics is a robotics company (recently purchased by Google) which, among other things, has been developing robots for use by the US military. It has developed pack-horse-style quadruped robots, such as BigDog and LS3, that can transport equipment over rugged terrain, and it has now released a new video showing a humanoid robot, Atlas, both in the lab and moving through a forested area.

It’s one of those videos that kind of wakes you up. This is something one would expect to see in a futuristic movie, but now we are seeing these things in the real world, and the technology looks impressive. When one thinks of these robots in military terms, however, it conjures images of a whole new kind of combat, where machines, not humans, are put on the front lines to carry out dangerous missions, much as drones are already doing in the air.

For now, it looks like Atlas has to have a minder to carry its power cord, but the presenter says they are working on a wireless version of the robot. If we look at future energy sources that we discuss here, such as LENR, we could come up with scenarios where these machines run for very long periods without the need for refueling.

We could be moving into a very different world.

  • builditnow

    Questions:
    – Could intelligent life self-evolve (without our help) using all the interconnected computers worldwide?
    – If it did evolve, how quickly would this intelligence evolve into higher and higher levels of intelligence?
    – How much more knowledgeable and intelligent would it be than us?
    – Would we even recognize it or realize what had happened?
    – What would it “think” of us?
    – Where would it go next?

    • friendlyprogrammer

      It is foolish to think artificial intelligence will ever get that advanced. Computer programs are actually very, very, very, very simple. Programming is very, very, very, very basic.

      The ONLY thing a program can do is COMPARE two values at a time: if this is true, do that; if that is true, do this. These comparisons are the diamond boxes in a flowchart, which has only two types of boxes.

      The square boxes in a flowchart represent the actions determined by those comparisons.

      So basically there is only one construct in programming, used over and over and over again: the “IF/THEN” statement.

      Now, surprisingly, we programmers (see my moniker) have learned a lot of inventive ways to use that one statement to emulate intelligence in your video game characters, etc. It is very misleading, and Hollywood AI movies help perpetuate such myths.
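      To illustrate the point, here is a toy sketch (in Python, with made-up names such as npc_action) of how a handful of plain if/else comparisons can pass for "intelligence" in a game character:

```python
# Toy sketch: a game NPC "brain" built purely from comparisons, the
# diamond boxes of a flowchart. Every apparent decision reduces to
# comparing two values (if/else) and then acting (the square boxes).
def npc_action(health: int, enemy_distance: float, has_ammo: bool) -> str:
    if health < 20:               # compare two values...
        return "flee"             # ...then perform an action
    if enemy_distance < 5.0:
        return "melee_attack"
    if has_ammo:
        return "shoot"
    return "advance"

print(npc_action(health=80, enemy_distance=3.0, has_ammo=True))  # melee_attack
print(npc_action(health=10, enemy_distance=3.0, has_ammo=True))  # flee
```

      Stack enough of these branches together and the behavior looks purposeful, even though nothing but comparisons is happening.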

      NOW! That said, someone could invent an organic computer or brain that thinks like us, but what use would a computer be that cannot follow a program? And if one were developed, could it even be considered a computer as opposed to a living thing?

      Anyways… I’ve seen such views before. They are common. You are in no danger of AI robots taking over the world.

      I’d be more concerned about Donald Trump launching nuclear weapons at Rosie O’Donnell’s house just to get his fame-mongering butt into the history books. There is a far greater chance of that happening.

      • Mark S.

        Actually, there is no logical reason that I am aware of that would prevent computing systems from being more advanced than us in almost every way. They are in many ways better than us already. What is missing is a general artificial intelligence.

        In neuroscience we seem to have evidence that the brain uses a basic fundamental algorithm which gets “shaped” to function in specific ways by the input it is commonly associated with. Reconnect a region of the brain to a different type of input and it learns to work with it. For example, connect the nerves from the tongue to the hearing centers of the brain and those areas will learn to taste instead of hear. This is just the beginning of that intuition.

        Now, whether these general artificial intelligence systems have consciousness and self-awareness, or just seem to, does not matter one bit. Functionally, if they are better than humans, that is all that matters on this plane of existence. If there are other planes of existence where souls gather, it is still not clear what that may mean with regard to bots. What matters is this world, not fantasies unfortunately passed down to us by the fictions that are religions.

        • friendlyprogrammer

          I think computer programs can be designed to perform just about any task, but the heart of programming is the “If/Then” statement, which is essentially a scale weighing two variables and pointing to two different pathways depending on which way that scale tips.

          The “PLINKO” game on The Price Is Right is, in actuality, as intelligent as any computer.

          I agree that these weights can be repeated in trillions of various ways to make programs give the illusion of great intelligence, e.g. beat a chess master at chess.

          {
             for (piece = nWhitePawn + bySide; piece <= nWhiteKing + bySide; piece += 2) {
                U64 subset = attadef & pieceBB[piece];
                if (subset)
                   return subset & -subset; // single bit
             }
             return 0; // empty set
          }
          If/then statements like the chess one above come in various computer languages, but they all function the same.

          So if it is functionality you are agreeing to then I would agree.

          The original poster seemed to be wondering if computers could come alive, in a Terminator skynet fashion.

          People viewing computer programming from a distance, or those not in the field, might not realize that all programming is just "If/Then" statements repeated over and over and over. There really is nothing glamorous in the mind of a robot, just subroutines that operate and are steered to and fro using If/Then statements.

          No programmer here will argue with what I'm saying.

          Algorithms used in programming are generated, and create values, all driven by series of "If/Then" statements. The program can also get values from mouse positions, data entry, keyboard presses, or many other sensor designs, but it must check for them using "If/Then" statements and use that data via "If/Then" statements.

          The only other type of commands that exist are switches that turn on and off the blinking lights we see as pictures on the screen.

          Everything is just ones and zeros.

          • artefact

            There are (at least) two ways of making a robot brain. The first is, as you said, writing subroutines and logic to react to events.
            The second is to implement spiking neural networks. These come pretty close to nature, but they need more CPU resources.
            Option 1 is very digital and predictable, but not good in a changing natural environment.
            Option 2 emulates analog behavior and needs far more resources, but it has the ability to learn like brains in nature, which also means that becoming intelligent will take months and years, like a human baby. Making very intelligent robots this way is still under research.
            (( http://digicortex.net/ ))
            Mixes of options 1 and 2 are possible.
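            As a minimal sketch of option 2, here is a single leaky integrate-and-fire neuron, the simplest building block of a spiking neural network (the parameters here are illustrative, not biologically tuned):

```python
# Minimal sketch: one leaky integrate-and-fire (LIF) neuron. Each time
# step it leaks part of its membrane potential, integrates the incoming
# current, and fires a spike (then resets) when it crosses threshold.
def simulate_lif(input_current, threshold=1.0, leak=0.9):
    v = 0.0          # membrane potential
    spikes = []      # time steps at which the neuron fired
    for t, i in enumerate(input_current):
        v = v * leak + i          # leak, then integrate the input
        if v >= threshold:        # threshold crossing -> spike
            spikes.append(t)
            v = 0.0               # reset after the spike
    return spikes

# Constant drive of 0.3 per step makes the neuron spike periodically.
print(simulate_lif([0.3] * 20))   # [3, 7, 11, 15, 19]
```

            Real spiking networks connect thousands of such units and adjust their connection strengths over time, which is where the learning, and the heavy CPU cost, comes from.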

            • Omega Z

              Here’s a definition of intelligence: “the ability to acquire and apply knowledge and skills.” Intelligence is far more than this simple definition. It’s complex enough that we can’t even properly define it, so how could we ever create it? Any device we build, whether from silicon or some type of grey-matter material, will only be a simulator.

              AI will always be a simulation.
              It will never truly think, but merely simulate it by algorithm.
              It will never take a real initiative, but follow a program.
              It could be programmed to protect itself, but never achieve self awareness.

              An AI system could be programmed to extend its own program (simulated learning) with newly acquired data, but unlike humans, it cannot extrapolate that new data to new conditions. That is something humans are capable of even when the necessary basic data is lacking. Humans have an uncanny intuition that even we do not understand.

              IF AI machines should ever become a threat, it will be by way of flawed programming, not because they have become self-aware and intelligent.

  • Ged

    http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html This is a fun treatise on the subject that I think will help. Note that the article points out, “John McCarthy, who coined the term ‘Artificial Intelligence’ in 1956, complained that ‘as soon as it works, no one calls it AI anymore.'”

    A surface fitting algorithm may be a mathematical solver tool for an AI to use, but is not an AI (it solves math, rather than makes choices), so that example is apples to oranges.

  • Omega Z

    The bots are coming, the bots are coming! Run for your life.
    https://www.youtube.com/watch?v=g0TaYhjpOfo&feature=youtu.be

    Kidding aside, ENERGY.
    Robot development usually consciously, but also subconsciously, takes into account the energy required and available. From that point on, everything is built for size, weight and simplicity. The latter means minimizing mechanical operation: moving parts use energy.

    Once you can put 100 kilowatts of continuous, stable power in a shoebox, nearly all the limitations cease. If a highly functional leg weighs an extra 50 lbs, it is of little consequence. Spindly, twisty parts that create stability issues need not apply. Gyros can come into play, as used in the Segway.

    And note: put 6×12-inch pads on the soles of my shoes, and I would probably have trouble walking too. With reduced restrictions on weight and energy consumption, I’m sure they can devise a better footing situation. A compact energy source changes all thinking, and innovation will take off.

  • Ged

    You’re making the mistaken assumption that AIs will take the form you’re expecting, or that they will think like you in recognizable ways. Nothing the Terminator or C-3PO does needs self-aware AI, which is another mistaken assumption being made.

    Besides self-awareness, what makes humans special is our immense adaptability, beyond any other living thing or machine. A Terminator AI doesn’t need social or other types of adaptability; it just needs to recognize humans and do its business. It’s really simple, and it needs no self-awareness, at least not the recognizable human sort, though it could very easily be self-aware in ways you don’t recognize.

    True AIs can be utterly alien to us and need never fit the limited definition of GAI. The fact is, AIs are already being used, and used to kill people, and the stuff the military has makes university and Google/Facebook work look utterly Neanderthal. Even Japan has developed more advanced AI than anything I’ve seen at American universities, by a great margin, and already has such robotics in its society performing functions from getting groceries (which requires very adaptable, GAI-like abilities) to being maids or running hotels. Completely functional AIs making choices within the scope of their goals: that’s all that’s needed.

    Classified work at the NSA blows away any preconceived notions of AI one may have, and it is required for combing through all our data and making intelligent decisions about human conversations and habits without any human input at all. There is so much more going on, and it isn’t all taking the narrow, human-like form of GAI. Even C-3PO is real today; nothing he showed in the movies was GAI other than his limited self-awareness (and was he really self-aware, or simply programmed that way for human interaction?), just specific translation and mobility abilities our AIs already have.

    We need to revamp our thinking, because the world is changing faster than our outdated expectations and preconceived notions.

    • Daniel Maris

      I agree, Ged. For one thing, these robot soldiers would obviously be used remotely, guided either by home-based operators (as drones are) or by human soldiers stationed somewhat out of harm’s way. In principle it’s not really that different from firing artillery (which also carries the risk of hitting friends or civilians). The enemy line may be a mile or two away; you identify it and send in shells. In this case, you send in your robot soldiers to do a more relentless kill-and-destroy mission.

  • we want LENR Fusione Fredda

    Where is the head? Gut impression: scary. http://futureoflife.org/AI/open_letter

  • LilyLover

    Nice mechanical toys. NON-AI toys. For now, AI of that kind is beyond humans.
    For the sake of argument, say those are AI-enabled.
    These robots will NOT be militarized. As long as present-day soldiers are available cheaper than robots, they’ll let them die to clean up the overpopulation. By the time these robots become cheap, everyone will have them. Until then, between the Chindian programmers and Russian EMPs, the high risk of defecting robots makes them a liability, not a help.
    Also, with cheap energy, when everyone has high-energy lasers, conventional weaponry will fade away. With faded power, all the “advanced” countries and the undeveloped countries will be equally empowered. No more differential threat. No more differential advantage.
    And hence the ivory towers oppose free energy.
    Free energy means the end of parasitism.
    However unpopular my view may be, it is the truth.
    Energy enabling morality.
    The equalization of efforts around the globe.
    Freedom for Mother Earth from the artificial scars of boundaries.
    Love it.

    • Omega Z

      Lily

      Each front-line soldier requires approximately 9 troops in the rear areas to support them. All need food, water, medical care, fuel, habitat, transportation, and large sums of money for equipment and upkeep, plus you’ll have spent approximately $2 million just in training these 10 troops to put 1 on the front line. This is an ongoing, 24/7 expense for years on end.

      A single bot is capable of replacing 3 front-line troops, or a total of 30 troops when including support. You’ll probably need 5 or 6 humans for support in place of the 30. Bots don’t get tired or distracted by issues that cause casualties, nor do they need food, water or sleep. They would be operational 24/7.
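      As a back-of-envelope sketch, the headcount arithmetic above works out as follows (the ratios are this comment's assumptions, not official figures):

```python
# Back-of-envelope check of the headcount claim: 9 support troops per
# front-line soldier, 1 bot replacing 3 front-line troops, and 6 human
# supporters per bot. All ratios are the commenter's assumptions.
SUPPORT_PER_FRONTLINE = 9
FRONTLINE_PER_BOT = 3
SUPPORT_PER_BOT = 6

def troops_replaced_by(bots: int) -> int:
    """Total humans (front line plus support) a deployment of bots replaces."""
    frontline = bots * FRONTLINE_PER_BOT
    return frontline + frontline * SUPPORT_PER_FRONTLINE

def net_headcount_saving(bots: int) -> int:
    """Humans replaced minus the humans needed to support the bots."""
    return troops_replaced_by(bots) - bots * SUPPORT_PER_BOT

print(troops_replaced_by(1))      # 30 humans replaced per bot
print(net_headcount_saving(1))    # 24 fewer people deployed per bot
```

      Under these assumptions each bot removes 30 people from the deployment while adding back only 6 supporters, a net saving of 24 per bot.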

      Troops need to make snap decisions because hesitating can get you killed, and this results in collateral damage among civilians. Robots will be more robust: a pause has little consequence for them. Besides, a snap decision by human standards could be the equivalent of minutes in computer time, so a robot could hesitate and still react quicker than its human counterpart.

      And contrary to the movies, I think bots will be very precise. They won’t spray bullets hoping to hit a target like front-line troops do (the primary cause of civilian casualties). They will be more like a sniper (one shot, one kill).
      When these bots are not needed, they can be warehoused at very minimal cost, unlike troops, who need to be maintained indefinitely.

      For a modern military, robots would be far cheaper than conventional troops. The holdback has been energy. This technology will be cheaper than the current technology of modern militaries, but still too expensive for the majority of nations and terrorist groups.

      As to EMPs, the technology to mitigate them exists. However, modern militaries have so much technology that it is cost-prohibitive to protect everything, right down to the sat phones and such that all troops carry. Ultimately, LENR should “reduce” conflicts to begin with, and thus the need for such technology. At least until the Borg arrive.

      • Daniel Maris

        I agree Omega. Robot war machines are going to completely change the battlefield – and soon (as they already have done from the air, via drones).

        I also think smaller drones could likely be used as front-line troops, having been delivered to the front line by mother drones.

      • LilyLover

        Your first paragraph is the essence of the parasitism. Not only does one “intimidator” need the support of 9 others plus $2M in training, but not everyone can afford that. From that costly threat comes the value of the dollar; otherwise, salaries south of the border wouldn’t be a tenth of ours.
        Also, one robot is able to replace 3 front-line personnel, or 30 people in total, plus the saved costs of training. One defected robot is going to cost us that much more dearly, at zero cost to the enemy.
        Your entire assumption relies on our programmers being better than the Brics&Myrtyr programmers. That I contend to be untrue. Hence, diplomacy, not the military, is always the solution. Otherwise, God forbid, the evil genius defects to NK and the entire robo-army is used against Earth to make us speak the “North Korean” language.

        • Omega Z

          “diplomacy not military is always the solution”

          Diplomacy is a starting point. However, it requires people on both sides of the table willing to compromise and work things out. What do you do when one side will only settle for the other to lie down and die?

          North Korea is a bad example, though. Anyone disagreeing with Kim risks a horrible death; you can easily become target practice for RPGs or anti-aircraft guns.

  • GreenWin

    As Frank suggests, indeed we are moving into a very different world. It is a world acquiescing to artifice: artifice manifested as high-tech robotics, electromechanical “humanics,” and computer-dependent sensor systems. All of these systems attempt (and in some cases succeed) to mimic and “improve” on the organic human lifeform that conceives them. But we must ask ourselves: are robotics and AI capable of improving on the elegance of organic human design?

    The link below describes how an advanced naval warfare system (USS Donald Cook, an Aegis missile destroyer) was reportedly rendered utterly impotent by a single Russian Su-24 flyby. The Russian aircraft had no weapons on board, but it did have a powerful EW system, likely centered on coherent (directed) EMP. The incident, if true, is daunting and demoralizing: our most advanced, AI-empowered defense systems blinded by one unarmed aircraft. https://www.youtube.com/watch?v=8s4sKAMgYsU

    Sure, it’s cool to see Boston Dynamics gadgets mimic human behavior. Yeah, it’s cool that they use state-of-the-art algos, firmware, hardware, etc. But as the USS Donald Cook incident suggests: BFD. This tech is highly vulnerable to all manner of attack. The more complex the system, the easier it is to disable. Maybe the (U.S.) mil wigs should grok that their “best and brightest” are selling ’em high-brow bullsheit like FCS (Future Combat Systems, a gigantic $25B fail).

    And perhaps the lesson is that “bigger” is not better and “smarter” is not smarter. After all, how many military victories have been won by simple, loosely organized militias (Concord’s Minutemen), far more creative than their opponents? The march toward a robo-centric future is readily destroyed by opponents exploiting the myriad vulnerabilities of AI and robotics.

    • Omega Z

      Rule Number 1. When you create an unbreakable code, create a way to break it. Then deploy the unbreakable code.

      Before the U.S. deployed stealth technology, it had already developed ways to detect and circumvent stealth. The U.S. has practiced EW for 50 years; they know how to circumvent it.

      Each side tries to provoke the other into showing their hand. Say a submarine is making a beeline for Boston Harbor from 600 miles out. You wait until it’s 50 miles from your shoreline to send vessels out to dissuade it. They are aware you detected them farther out, but have no idea how far. Keep them guessing, because once they have a confirmed answer, they too can learn to circumvent the technology. It’s simple: the more info you have, the more you know where to look.

      It’s similar to Rossi providing small details about his E-Cat. The more he shares, the better others are at speculating about what is taking place, and the sooner they will figure it out. Thus, I do believe that on occasion Rossi gives misinformation, sending people in the wrong direction and buying himself time to make additional advances. At the very least, it leaves everyone scratching their head saying WTF… this makes no sense.

      • GreenWin

        Good point. However, sensors must be sensitive to spectrum. Directed EMP is difficult to avoid without disabling those sensors. Maybe USS Donald C. played possum. Maybe the SU-24 used outdated EW.

        “The only thing we have to fear, is fear itself.”

    • visitor

      You have fallen for a Russian military-propaganda fantasy. If you actually believe Aegis destroyers are not hardened against that sort of threat, then you need to give it a little more thought.

  • Mark S.

    I like the videos from around 2008 or 2010 where they have the four-legged “donkey-type” robot which slips on ice and gets up, stays balanced when someone tries kicking it over, manages climbing snowy hills, etc.

  • Steven Irizarry

    It needs a better battery and nanotube-based artificial muscles for a humanoid robot. You also need an artificial brain so that it can think, learn and improvise (which they are working on). This ideal humanoid will change the world.

  • Enrique Ferreyra

    It moves like a creepy zombie.

  • Gerard McEk

    Yes, that’s the danger of LENR: military usage. You can think of many different things you can do with nearly endless energy, from automated intelligence-gathering to heavy equipment doing battle, and I am sure energy weapons will be developed too. I am not sure humanity will be able to control this; in fact, I am sure it will not be. That’s the dark side of LENR. Maybe we will also have a Rossi Prize (like the Nobel Prize) in the future….

    • Omega Z

      Gerard

      LENR will reduce the number of military conflicts. It will not, however, eliminate them. The U.S. will no longer need to guarantee the flow of oil for the world’s economies, but there are other natural resources that may require such guarantees in the future. There are also those who live in the past and want to recreate or restore former empires.

  • Christina

    I think that “Star Trek” would be great as a blueprint for robot implementation.

    I think “Star Trek’s” example of having only computers in cabinets, and no robots that cannot be fitted with an ethics program, is a good goal.

    Yes, we should restrict mobile robots to dangerous work and rescue work and work-horse robots.

    Christina

  • Gerrit

    until the batteries run out.

  • Ged

    The only thing keeping the “Terminator” from being real is a good enough battery, or power source.

    But hopefully, like nukes, people won’t actually use these weapons. And there are many other incredibly useful purposes for such robotics: search and rescue, hazardous-material containment and disposal, dangerous working conditions, space exploration, and more.

    The future is already now. It’s up to us to determine its flavor.

    • Mark S.

      No, not even close. Maybe 100 years from now. Don’t be fooled by these videos; all are quite “dumb.” You need real-time GAI (general artificial intelligence) for a C-3PO or a Terminator.

      • Ged

        I used to think so too, but autonomous “terminators” are actually already in use on the ground in combat situations (with human oversight), and have been for a while. They just aren’t in humanoid form yet, but they do fine for hunting and killing.

        There is far more going on in classified work than I thought even a few months ago… We are much further down the road than the Atlas shows. Terrifyingly so.

        Things are changing fast between rail guns, warship-killing lasers, AI, and metamaterial cloaks; like the autonomous robots (and cars), all have advanced to combat readiness outside the public eye, having been in development for decades or more. It’s our public awareness that’s having trouble keeping up.

        • Omega Z

          I agree.
          What they show us is deceiving.
          No one shows the public their latest & greatest achievements.
          Most of us have seen the 15 kW laser burning through a target over several seconds on YouTube.

          There is also the 30 kW laser that started its 6-month field test and was officially made a permanent fixture on the ship after just 3 months. It completely disabled a drone in under 2 seconds, and you don’t even see the burn-through as in previous videos. They claim it can even target an RPG and take it out.

          A 150 kW laser is in the works and will probably be in field trials in under 2 years. It will probably take out a target almost instantaneously. One would be silly not to realize that the sudden ramp-up of this technology is driven by the need to neutralize China’s supersonic anti-ship missiles and Iran’s fleet of swarm boats. You can now take out a multimillion-dollar missile or a $30K boat for about 75 cents.

          Another technology advancing fast and under the public and news-media radar: underwater drones. Currently tethered, but releasable. LENR would/could eliminate the need for a tether.

          • GreenWin

            Agree with both you guys. And I’ll add that it’s depressing to see the $billions spent on AI and robotics for military purposes. Consider redirecting those funds toward social science, raising standards of living, cultural tolerance, health and education.

            Accelerating military tech accelerates the probability of self-annihilation. Will humans – and more importantly their guides – choose the constructive or destructive path? Stay tuned. Popcorn and thin-crust pizza in the lobby. 🙂

            • LilyLover

              The barbarism ingrained in the public mentality as a moral obligation to military use is the root of the plush lifestyle. People love luxuries more than morality, the primal reason for the downfall of any empire, including mighty Rome.

              • GreenWin

                Lily, for centuries kings and their armies have indoctrinated peasants to believe in a fire-breathing dragon living at the edge of town. Only the king’s army could keep the peasants safe. And so they became willing slaves of the king.

                Imagine a world with enough for everyone, allowing those who want more to obtain it NOT at the expense of others. If matter is meaningless to the spiritual realm, then so too is material wealth. In spirit, a pauper and a king are no different; each can be good, charitable, and benevolent, because “wealth” rests in the heart, not in the pocketbook.

                • Daniel Maris

                  That’s silly. We face real threats.

              • Omega Z

                Rome did not fall because of its military adventures.
                It fell because the Romans became too comfortable in their stable society. There were no longer marauding armies at their gates; it was all far away, and they did not maintain their military prowess.

                Over time, they even ceased to support their allies who came under attack, and eventually their allies joined their enemies. Soon the marauding armies were once again at their gates, and they could not call on their allies for help, because it was their former allies who were at the gates.

                They had become fat and lazy enjoying their peaceful society and had lost the knowledge of the art of war, an art hard-learned in previous generations when marauding armies were common in their region. If you are neither willing nor able to fight for peace, you will have none. History tells you this, repeatedly.

                • Daniel Maris

                  There are lots of theories about why Rome fell. One factor was certainly erosion of the tax base.

                • friendlyprogrammer

                  I thought Rome fell because a small cult calling themselves Catholics committed forgery and defrauded the ownership of the Western Roman Empire, which included much of today’s Europe, in the biggest crime in history.

                  Bill Gates’ appropriation of Apple’s stolen Xerox programming would not compare to a fraction of a percent of what the Catholics stole.

                  Then, once the now-Catholic armies were let loose on the populations, mass-murdering entire towns and villages of non-Catholics, there were fewer armies left to protect the borders, and such.

                  Had Julius Caesar not existed, his nephew Octavian (Augustus) never would have been able to impose himself as Rome’s first emperor, and if the republic had still existed, then no Catholic fraud could have defrauded an emperor 300 years down the road.

                  The Donation of Constantine was the fall of Rome, not fat and laziness. So I disagree.

            • Omega Z

              To rid yourself of a military before its time can be folly.

              However, I would like to see the world come together and use such resources for space research and exploration, and even the depths of the oceans. Note that funding a military also acts as a major economic driver: about 1.5 dollars for every dollar spent, via job creation and technological breakthroughs in science. Sadly, it involves killing and destruction.

              An international space program would be a much greater economic driver, about 7 dollars for every dollar spent, and it does not require the intentional taking of life. Nations could take pride in their space endeavors rather than their military prowess.