That's just me driving off to my comfort zone!!
Don't be scared!!! You want a lollipop or something?
I don't take candy from strangers.
How about a banana? It'll make you feel good!!
This should be a wake-up call, people.
Will it be *easy* to shield against EMP? In case of an EMP attack, does the robot soldier turn into a paperweight?
It is. Pff, that's like saying you can use a gun to shoot a button to start the fire extinguishers.
India eyes developing autonomous killer robots for military
Published time: 21 May, 2018 13:16
© Ben Stansall / AFP
India is assessing whether it needs to develop AI-based weapon systems for the military, capable of identifying and attacking targets without human input. The tech can't be reasoned with and won’t feel pity, remorse or fear.
The 17-member AI task force, which includes officials from the Indian military, defense ministry, arms contractors and research organizations, was formed in February. New Delhi sees AI technologies as potentially reshaping national security and defense and wants to keep up with leaders in the field.
Among the goals the group is working on is "developing intelligent, autonomous robotic systems," Ajay Kumar, the secretary of the Defense Production Department in the Indian Defense Ministry, told The Times of India.
“The world is moving towards AI-driven warfare. India is also taking necessary steps to prepare our armed forces because AI has the potential to have a transformative impact on national security. The government has set up the AI task force to prepare the roadmap for it,” he said.
The government is expected to start placing initial tenders for AI capabilities with defense applications within two years, Kumar added.
AI technologies, or more precisely algorithms using machine learning for better performance, have seen rapid development in the past few years. If adopted by the military, they can be used for automatic target acquisition, automated analysis of intelligence data, improvement of logistics and other tasks.
The same technologies may potentially become superior to humans in some combat roles, beating the organic operators of remotely-controlled weapons in their reaction times and accuracy. But developing fully autonomous weapon systems poses yet-to-be answered questions about moral and legal ramifications of entrusting life-and-death decisions to computer algorithms.
Tech leaders say killer robots would be 'dangerously destabilizing' force in the world
MUSK. Elon Musk is among the leaders of the 160 organizations that have signed the pledge against automated weapons. File photo by Hector Guerrero/AFP
The list is extensive and includes some of the most influential names in the overlapping worlds of technology, science and academia.
Among them are billionaire inventor and OpenAI founder Elon Musk, Skype founder Jaan Tallinn, artificial intelligence researcher Stuart Russell, as well as the three founders of Google DeepMind – the company's premier machine learning research group.
In total, more than 160 organizations and 2,460 individuals from 90 countries promised this week to not participate in or support the development and use of lethal autonomous weapons. The pledge says artificial intelligence is expected to play an increasing role in military systems and calls upon governments and politicians to introduce laws regulating such weapons in an effort "to create a future with strong international norms."
"Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems," the pledge says.
"Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage," the pledge adds. (READ: 23 principles to 'best manage AI in coming decades')
Lethal autonomous weapons systems can identify, target, and kill without human input, according to the Future of Life Institute, a Boston-based charity that organized the pledge and seeks to reduce risks posed by AI. The organization claims autonomous weapons systems do not include drones, which rely on human pilots and decision-makers to operate.
According to Human Rights Watch, autonomous weapons systems are being developed in many nations around the world – "particularly the United States, China, Israel, South Korea, Russia and the United Kingdom." FLI claims autonomous weapons systems will be at risk for hacking and likely to end up on the black market. The organization argues the systems should be subject to the same sort of international bans as biological and chemical weapons.
FLI has even coined a name for these weapons systems – "slaughterbots."
The lack of human control also raises troubling ethical questions, according to Toby Walsh, a Scientia Professor of Artificial Intelligence at the University of New South Wales in Sydney, who helped to organize the pledge.
"We cannot hand over the decision as to who lives and who dies to machines," Walsh said, according to a statement from FLI. "They do not have the ethics to do so. I encourage you and your organizations to pledge to ensure that war does not become more terrible in this way."
Musk – arguably the pledge's most recognizable name – has become an outspoken critic of autonomous weapons and the rise of autonomous machines. The Tesla chief executive has said that artificial intelligence is more of a risk to the world than North Korea.
Last year, he joined more than 100 robotics and artificial intelligence experts calling on the United Nations to ban autonomous weapons.
"Lethal autonomous weapons threaten to become the third revolution in warfare," Musk and 115 other experts, including Alphabet's artificial intelligence expert, Mustafa Suleyman, warned in an open letter in August.
"Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at time scales faster than humans can comprehend."
According to the letter, "These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways."
Fighting killer robots with public declarations might seem ineffective, but Yoshua Bengio – an AI expert at the Montreal Institute for Learning Algorithms – told the Guardian that the pledge could rally public opinion against autonomous weapons.
"This approach actually worked for land mines, thanks to international treaties and public shaming, even though major countries like the US did not sign the treaty banning landmines," he said. "American companies have stopped building landmines." – © 2018. Washington Post

MEPs vote to ban 'killer robots' on battlefield
- 12 September 2018
Image caption: Killer robots are not science fiction, one MEP says - although they probably won't look like this
Image copyright: Getty Images
The European Parliament has passed a resolution calling for an international ban on so-called killer robots.
It aims to pre-empt the development and use of autonomous weapon systems that can kill without human intervention.
Last month, talks at the UN failed to reach consensus on the issue, with some countries saying the benefits of autonomous weapons should be explored.
Some MEPs were concerned that legislation could limit scientific progress in artificial intelligence, while others said it could become a security issue if some countries allowed such weapons and others did not.
"I know this might look like a debate about some distant future or about science fiction. It's not," said Federica Mogherini, the EU chief of foreign and security policy during the debate at the European Parliament.
'Arms race'
"Autonomous weapons systems must be banned internationally," said Bodil Valero, security policy spokeswoman for the EU Parliament's Greens/EFA Group.
"The power to decide over life and death should never be taken out of human hands and given to machines."
The resolution comes ahead of negotiations scheduled at the United Nations in November, where it is hoped an agreement on an international ban can be reached.
In August, experts from a range of countries met at the UN headquarters in Geneva to discuss ways to define and deal with computer-controlled weapons.
"From artificially intelligent drones to automated guns that can choose their own targets, technological advances in weaponry are far outpacing international law," Rasha Abdul Rahim, an artificial intelligence researcher at Amnesty International, said at the time.
"It's not too late to change course. A ban on fully autonomous weapons systems could prevent some truly dystopian scenarios, like a new high-tech arms race between world superpowers which would cause autonomous weapons to proliferate widely," she added.
But some countries - including Israel, Russia, South Korea and the US - opposed new measures at the August meeting, saying that they wanted to explore potential "advantages" from autonomous weapons systems.
MEPs want international ban on 'killer robots'
By EUOBSERVER
12. SEP, 15:55
The European Parliament has called for an international ban on lethal autonomous weapons, known colloquially as 'killer robots'. A non-binding text, adopted on Wednesday with 566 MEPs in favour, 57 against, and 73 abstaining, said that the 28 EU member states should have a common position on autonomous weapons by November and "speak in relevant forums with one voice".
