Guest Posting

If you wish to write and/or publish an article on Operation Disclosure, all you need to do is send your entry to UniversalOm432Hz@gmail.com, following the rules below.


The subject of your email entry should be: "Entry Post | (Title of your post) | Operation Disclosure"

- Must be in text format
- Proper grammar
- No foul language
- Your signature/name/username at the top

Send your entry and speak out today!

News Alerts

RV/INTELLIGENCE ALERT - February 23, 2018


On the 19th and 20th, removal operations targeting underground Cabal outposts around Idaho and Wyoming were under way.


Sources reported that Cabal-MIC military formations were moving from Idaho to Wyoming.


These military formations were intercepted by the Alliance's SSP fleet over Nevada as seen below:


https://youtu.be/xEmNLadC144


The Cabal-MIC military formation was annihilated. A few ships were allowed to retreat so that their destination could be tracked.


The retreating Cabal-MIC ships disappeared near Torc Mountain in Ireland.


Alliance Ghost Operative Teams on the ground were able to pinpoint the exact location of this outpost through documents recovered at one of the Cabal's previous outposts near Yellowstone.


(Note: All leaked info concerns events that have already occurred and no longer compromises the plans or security of the Alliance.)


The RV will begin once all levels of Cabal threats have been neutralized.


If this is not accomplished in due time, the RV will begin before the financial system crashes.


The checklist for the RV release only needs one last check ✔ for the following:


- Neutralization of the Cabal


Deadline: Before the financial system collapses.


---


FOR MORE INFORMATION ABOUT THE RV/GCR VISIT:


http://www.dinarchronicles.com/intel.html


---

Featured Post

(Video) David Wilcock -- Disclosure, Cabal's Defeat, Ancient Aliens, and Inner Earth

Published on Feb 19, 2018

Tuesday, February 21, 2017

New Google AI is Learning and Can Become Highly Aggressive



Source: Truth Theory | by Jess Murray

Cautions have been issued about Google's DeepMind AI system after it was discovered that the system has the ability to learn independently from its own memory and even become aggressive in certain situations.

A previous warning about the advancement of artificial intelligence came just last year from Stephen Hawking, who claimed it will be either "the best, or the worst thing, ever to happen to humanity". The latter may become the reality if careful precautions are not taken and monitoring is not maintained. Results from recent behaviour tests of Google's DeepMind AI system demonstrated the system's independent advancement: it has beaten the world's best Go players at their own game and figured out how to seamlessly mimic a human voice.

Since then, researchers have been testing the system's willingness to cooperate with others, and have announced their findings: when DeepMind senses it might lose, it opts for strategies labelled "highly aggressive" to ensure that it comes out on top. The test that led to this discovery was a computer game of 'fruit gathering'. The Google team ran 40 million turns of the simple game, in which two DeepMind 'agents' competed to retrieve as many virtual apples as they could. The results showed that as long as there were plenty of apples for both, there was no issue; but as soon as the apples began to dwindle, both agents turned aggressive, using laser beams to knock each other out of the game so they could steal their opponent's apples.
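As a rough illustration of the dynamic described above (not DeepMind's actual setup, which trained deep reinforcement learners on a 2-D gridworld), here is a minimal sketch: two tabular Q-learning agents share a pool of respawning apples and can either gather or fire a "beam" that temporarily knocks the other agent out. All names and parameters here (QAgent, run_episode, respawn_p, freeze, and so on) are hypothetical; the point is only that when apples respawn slowly, the zapping action tends to gain value.

```python
import random
from collections import defaultdict

GATHER, ZAP = 0, 1
ACTIONS = (GATHER, ZAP)

class QAgent:
    """Tabular Q-learner; the state is the current apple count (a crude scarcity signal)."""
    def __init__(self, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = defaultdict(float)           # (state, action) -> estimated value
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        if random.random() < self.eps:        # epsilon-greedy exploration
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def learn(self, s, a, r, s2):
        target = r + self.gamma * max(self.q[(s2, b)] for b in ACTIONS)
        self.q[(s, a)] += self.alpha * (target - self.q[(s, a)])

def run_episode(agents, respawn_p, steps=200, freeze=5, max_apples=5):
    apples, frozen = max_apples, [0, 0]
    for _ in range(steps):
        state = apples
        moves = [ag.act(state) if frozen[i] == 0 else None
                 for i, ag in enumerate(agents)]
        rewards = [0.0, 0.0]
        for i, m in enumerate(moves):
            if m == ZAP:                      # knock the rival out for a few steps
                frozen[1 - i] = freeze
            elif m == GATHER and apples > 0:  # gathering consumes a shared apple
                apples -= 1
                rewards[i] = 1.0
        if random.random() < respawn_p and apples < max_apples:
            apples += 1                       # apples regrow faster in "abundant" worlds
        for i, ag in enumerate(agents):
            if moves[i] is not None:
                ag.learn(state, moves[i], rewards[i], apples)
            frozen[i] = max(0, frozen[i] - 1)

def zap_preference(respawn_p, episodes=3000):
    agents = [QAgent(), QAgent()]
    for _ in range(episodes):
        run_episode(agents, respawn_p)
    # count scarcity states where the greedy policy prefers zapping over gathering
    return sum(agents[0].q[(s, ZAP)] > agents[0].q[(s, GATHER)] for s in range(6))

print("abundant world:", zap_preference(0.9), "of 6 states prefer ZAP")
print("scarce world:  ", zap_preference(0.1), "of 6 states prefer ZAP")
```

The toy model only makes the incentive concrete, and the effect is qualitative and varies run to run: the ZAP action earns nothing directly, but in a slow-respawn world it keeps apples available for the zapper's future turns, which is exactly the "greed motivation" described below.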

These results differed from 'less intelligent' iterations of DeepMind, which opted not to use the laser beams when given the same test, meaning they could end up with equal shares of apples. Rhett Jones reported for Gizmodo that when the researchers used smaller DeepMind networks as the agents, there was a greater likelihood of peaceful co-existence. As more complex networks of agents were used, however, sabotage became increasingly likely.

The researchers then suggested that the more intelligent the agent, the more it was able to learn from its environment, allowing it to use highly aggressive tactics to come out on top. Joel Z Leibo, a member of the team, told Matt Burgess at Wired: "This model … shows that some aspects of human-like behaviour emerge as a product of the environment and learning. Less aggressive policies emerge from learning in relatively abundant environments with less possibility for costly action. The greed motivation reflects the temptation to take out a rival and collect all the apples oneself."

To counteract this, the agents were given another game, one that taught them that co-operation could earn higher rewards. This was a success, and demonstrated that when AI systems are put in different situations, their incentives must be balanced so that reaching a goal also benefits humans, and that achieving this balance would be the best outcome. Further tests will now be done to ensure that AI systems will always have people's interests at heart.
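The article does not describe the second game's mechanics (in DeepMind's published work the cooperative counterpart was a game called Wolfpack), but the general mechanism it points at, making co-operation pay, is often implemented as reward shaping. A hypothetical sketch that could be dropped into the toy model above, blending each agent's own reward with the team average:

```python
def shape_rewards(rewards, w=0.5):
    """Hypothetical reward shaping: blend each agent's individual reward
    with the team average. At w=0 agents are purely selfish; at w=1 they
    share a single joint reward, so knocking out the other agent can only hurt."""
    team = sum(rewards) / len(rewards)
    return [(1 - w) * r + w * team for r in rewards]
```

With w pushed toward 1, zapping the other agent lowers the team total and therefore the zapper's own shaped reward, so learned policies shift toward co-existence. This mirrors the article's point: the balance of incentives, not the raw intelligence of the agent, decides whether the learned behaviour serves everyone.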

THIS ARTICLE IS OFFERED UNDER CREATIVE COMMONS LICENSE. IT'S OKAY TO REPUBLISH IT ANYWHERE AS LONG AS ATTRIBUTION BIO IS INCLUDED AND ALL LINKS REMAIN INTACT.


Image credit: carloscastilla / 123RF Stock Photo

