
Components for robotic soldiers that can kill on their own are being developed

Killer robot already here to exterminate
Can you outrun this machine killer?



Remember what happened to groups who fought with swords when they went up against folks with guns :coffee: that should be a wake-up call


 
one sec though: a robot that can run and jump could be used in fire-fighting (saving lives in home and office fires), or in fighting wild-fires that threaten homes.

a robot that can do house-chores well can assist the elderly, even bringing them to other human company safely, and back again to their own home.
as you might know, a lot of elderly people who are not wealthy are not getting the most basic care they need these days, and have to spend their days in uncomfortable retirement homes.

it's a specific package of technology that has to be avoided here, namely creating (semi-)autonomous[1] *warriors*.
in the not-so-distant future (10 to 40 years and beyond) all the components would be available at any time to put such kill robots together with increasing ease and efficiency. that's what's gotta be avoided with the kind of public awareness that only mass-media can provide if they put a topic 'on repeat'.
i think any large news agency would do well to repeat this topic and its latest developments at least 4 times per year.

[1] autonomous means to (largely) act on its own
 
and in case of an EMP attack? the robot soldier turns into a paperweight?
it will be *easy* to shield against EMP.

all you need to do is surround the control circuits and actuators (devices that make the limbs move) with metal.
kind of easy to do considering the thing will be made of metal.
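to put a rough number on that claim: the standard skin-depth formula tells you how quickly a field dies off inside a conductor, and from it you get the absorption loss of a solid metal shield. a short python sketch (the material, thickness and frequency values are my own illustrative assumptions, not from the post):

```python
import math

def skin_depth(freq_hz, conductivity, mu_r=1.0):
    """Depth (m) at which an EM field falls to 1/e of its surface value
    inside a conductor: delta = 1 / sqrt(pi * f * mu * sigma)."""
    mu0 = 4 * math.pi * 1e-7  # vacuum permeability (H/m)
    return 1.0 / math.sqrt(math.pi * freq_hz * mu_r * mu0 * conductivity)

def absorption_loss_db(thickness_m, freq_hz, conductivity, mu_r=1.0):
    """Approximate absorption loss of a solid shield, in decibels:
    about 8.686 dB per skin depth of thickness."""
    return 8.686 * thickness_m / skin_depth(freq_hz, conductivity, mu_r)

# example: 1 mm aluminum wall against a 100 MHz field component
sigma_al = 3.5e7  # conductivity of aluminum (S/m)
loss = absorption_loss_db(1e-3, 100e6, sigma_al)
print(f"{loss:.0f} dB")  # hundreds of dB of attenuation at this frequency
```

so on paper a millimetre of metal is massive overkill at the frequencies a fast pulse carries most of its energy. the real difficulty is elsewhere: seams, joints, cable penetrations and low-frequency magnetic coupling leak far more than the bulk metal, so "easy" is relative.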

pff that's like saying you can use the gun to shoot a button to start the fire extinguishers
it is.

but i haven't found a way yet to erode the mechanisms that make the rich richer and the poor poorer, by which the rich ensure that the poor masses have an actual need for robotic help in households and industries.
it would be far better to distribute money more evenly and see human workers continue to do the work that robots are set to do in civilian societies, and so prevent such research from being profitable.

but it would have to be stopped at the policy level, public debate level and mass-media level anyways, regardless of how robots are set to be used in civilian life.
 
https://www.rt.com/news/427313-indi...tm_source=rss&utm_medium=rss&utm_campaign=RSS

:(

India eyes developing autonomous killer robots for military
Published time: 21 May, 2018 13:16

© Ben Stansall / AFP
India is assessing whether it needs to develop AI-based weapon systems for the military, capable of identifying and attacking targets without human input. The tech can't be reasoned with and won’t feel pity, remorse or fear.
The 17-member AI task force, which includes officials from the Indian military, defense ministry, arms contractors and research organizations, was formed in February. New Delhi sees AI technologies as potentially reshaping national security and defense and wants to keep up with leaders in the field.

Among the goals which the group works on is "developing intelligent, autonomous robotic systems," Ajay Kumar, the secretary of the Defense Production Department in the Indian Defense Ministry, told The Times of India.

“The world is moving towards AI-driven warfare. India is also taking necessary steps to prepare our armed forces because AI has the potential to have a transformative impact on national security. The government has set up the AI task force to prepare the roadmap for it,” he said.


The government is expected to start placing initial tenders for AI capabilities with defense applications within two years, Kumar added.

AI technologies, or more precisely algorithms using machine learning for better performance, have seen rapid development in the past few years. If adopted by the military, they can be used for automatic target acquisition, automated analysis of intelligence data, improvement of logistics and other tasks.

The same technologies may potentially become superior to humans in some combat roles, beating the organic operators of remotely-controlled weapons in their reaction times and accuracy. But developing fully autonomous weapon systems poses yet-to-be answered questions about moral and legal ramifications of entrusting life-and-death decisions to computer algorithms.
 
https://theworldnews.net/ph-news/te...-dangerously-destabilizing-force-in-the-world

Tech leaders say killer robots would be 'dangerously destabilizing' force in the world

MUSK. Elon Musk is among the leaders of the 160 organizations that have signed the pledge against automated weapons. File photo by Hector Guerrero/AFP

The list is extensive and includes some of the most influential names in the overlapping worlds of technology, science and academia.

Among them are billionaire inventor and OpenAI founder Elon Musk, Skype founder Jaan Tallinn, artificial intelligence researcher Stuart Russell, as well as the three founders of Google DeepMind – the company's premier machine learning research group.

In total, more than 160 organizations and 2,460 individuals from 90 countries promised this week to not participate in or support the development and use of lethal autonomous weapons. The pledge says artificial intelligence is expected to play an increasing role in military systems and calls upon governments and politicians to introduce laws regulating such weapons in an effort "to create a future with strong international norms."

"Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems," the pledge says.

"Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage," the pledge adds. (READ: 23 principles to 'best manage AI in coming decades')

Lethal autonomous weapons systems can identify, target, and kill without human input, according to the Future of Life Institute, a Boston-based charity that organized the pledge and seeks to reduce risks posed by AI. The organization claims autonomous weapons systems do not include drones, which rely on human pilots and decision-makers to operate.

According to Human Rights Watch, autonomous weapons systems are being developed in many nations around the world – "particularly the United States, China, Israel, South Korea, Russia and the United Kingdom." FLI claims autonomous weapons systems will be at risk for hacking and likely to end up on the black market. The organization argues the systems should be subject to the same sort of international bans as biological and chemical weapons.

FLI has even coined a name for these weapons systems – "slaughterbots."

The lack of human control also raises troubling ethical questions, according to Toby Walsh, a Scientia Professor of Artificial Intelligence at the University of New South Wales in Sydney, who helped to organize the pledge.

"We cannot hand over the decision as to who lives and who dies to machines," Walsh said, according to a statement from FLI. They do not have the ethics to do so. I encourage you and your organizations to pledge to ensure that war does not become more terrible in this way."

Musk – arguably the pledge's most recognizable name – has become an outspoken critic of autonomous weapons and the rise of autonomous machines. The Tesla chief executive has said that artificial intelligence is more of a risk to the world than North Korea.

Last year, he joined more than 100 robotics and artificial intelligence experts calling on the United Nations to ban autonomous weapons.

"Lethal autonomous weapons threaten to become the third revolution in warfare," Musk and 115 other experts, including Alphabet's artificial intelligence expert, Mustafa Suleyman, warned in an open letter in August.

"Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at time scales faster than humans can comprehend."

According to the letter, "These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways."

Fighting killer robots with public declarations might seem ineffective, but Yoshua Bengio – an AI expert at the Montreal Institute for Learning Algorithms – told the Guardian that the pledge could rally public opinion against autonomous weapons.

"This approach actually worked for land mines, thanks to international treaties and public shaming, even though major countries like the US did not sign the treaty banning landmines," he said. "American companies have stopped building landmines." – © 2018. Washington Post

:yahoo:

a useful search for the interested, by the way:
google search: "automated weapons" or "killer robots", limited to results from the year preceding your search request:
https://www.google.nl/search?q="automated+weapons"+or+"killer+robots"&source=lnt&tbs=qdr:y&sa=X&ved=0ahUKEwjBjNufibDcAhUQbFAKHcWzAFYQpwUIIA&biw=876&bih=836
 
https://www.bbc.com/news/technology-45497617

MEPs vote to ban 'killer robots' on battlefield
  • 12 September 2018
Image copyright: Getty Images
Image caption: Killer robots are not science fiction, one MEP says - although they probably won't look like this
The European Parliament has passed a resolution calling for an international ban on so-called killer robots.

It aims to pre-empt the development and use of autonomous weapon systems that can kill without human intervention.

Last month, talks at the UN failed to reach consensus on the issue, with some countries saying the benefits of autonomous weapons should be explored.

And some MEPs were concerned legislation could limit scientific progress of artificial intelligence.

While others said it could become a security issue if some countries allowed such weapons while others did not.

"I know this might look like a debate about some distant future or about science fiction. It's not," said Federica Mogherini, the EU chief of foreign and security policy during the debate at the European Parliament.

'Arms race'
"Autonomous weapons systems must be banned internationally," said Bodil Valero, security policy spokeswoman for the EU Parliament's Greens/EFA Group.

"The power to decide over life and death should never be taken out of human hands and given to machines."

The resolution comes ahead of negotiations scheduled at the United Nations in November, where it is hoped an agreement on an international ban can be reached.

In August, experts from a range of countries met at the UN headquarters in Geneva to discuss ways to define and deal with computer-controlled weapons.

"From artificially intelligent drones to automated guns that can choose their own targets, technological advances in weaponry are far outpacing international law," Rasha Abdul Rahim, a researcher on artificial intelligence, at Amnesty International, said at the time.

"It's not too late to change course. A ban on fully autonomous weapons systems could prevent some truly dystopian scenarios, like a new high-tech arms race between world superpowers which would cause autonomous weapons to proliferate widely," he added.

But some countries - including Israel, Russia, South Korea and the US - opposed new measures at the August meeting, saying that they wanted to explore potential "advantages" from autonomous weapons systems.

https://euobserver.com/tickers/142817

MEPs want international ban on 'killer robots'
By EUOBSERVER

12. SEP, 15:55
The European Parliament has called for an international ban on lethal autonomous weapons, known colloquially as 'killer robots'. A non-binding text, adopted on Wednesday with 566 MEPs in favour, 57 against, and 73 abstaining, said that the 28 EU member states should have a common position on autonomous weapons by November and "speak in relevant forums with one voice".

:yahoo:
 
