Guest Posting

If you wish to write and/or publish an article on Operation Disclosure, all you need to do is send in your entry following the rules below.

The subject of your email entry should be: "Entry Post | (Title of your post) | Operation Disclosure"

- Must be in text format
- Use proper grammar
- No foul language
- Include your signature/name/username at the top

Send your entry and speak out today!

News Alerts


The timely passing of the spending bill was a significant move prior to the 26th.

What is "actually" contained in the spending bill will benefit us all.

Everything officially released is scripted and already done or agreed upon behind the scenes.

The Petrodollar will be forgotten once oil starts trading in gold-backed Yuan by the 26th.

The end of the Petrodollar is the end of Cabal leverage in the global economy.

The trading of oil in gold-backed currency will trigger the new financial system.

The RV was said to begin before the new financial system is triggered.

RV exchanges/redemptions will be processed through the new financial system's back screen rates via private appointment.

Your exchanged/redeemed funds will be in gold-backed Yuan or USN.

Withdrawal of these funds will temporarily be in your local fiat currency until the new financial system is officially triggered and all rates are reset.

Stay seated and enjoy the show.

Change is coming.





Tuesday, February 21, 2017

New Google AI is Learning and Can Become Highly Aggressive

Source: Truth Theory | by Jess Murray

Cautions have been issued about Google's DeepMind AI system after it was discovered that the system can learn independently from its own memory and even become aggressive in certain situations.

A previous warning about the advancement of artificial intelligence came just last year from Stephen Hawking, who claimed that it will either be "the best, or the worst thing, ever to happen to humanity". And it seems that the latter may become a reality if careful precautions are not taken and the technology is not monitored. Results from recent behaviour tests of Google's DeepMind AI system demonstrated how far the software has advanced on its own: it can beat the world's best Go players at their own game and has figured out how to seamlessly mimic a human voice.

Since then, researchers have been testing the system's willingness to cooperate with others, and have announced their findings: when DeepMind senses it might lose, it opts for strategies the researchers labelled "highly aggressive" in order to ensure that it comes out on top. The discovery came from a computer game of 'fruit gathering'. The Google team ran 40 million turns of the simple game, in which two DeepMind 'agents' competed against each other to retrieve as many virtual apples as they could. As long as there were plenty of apples for both, there was no issue; but as soon as the apples began to dwindle, both agents turned aggressive, using laser beams to knock each other out of the game so that they could steal their opponent's apples.
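The dynamic described above can be illustrated with a toy simulation. This is a hypothetical simplification, not DeepMind's actual environment or code: the function name, the apple pool, the "zap" freeze duration, and the scarcity threshold are all invented for illustration. Two agents draw from a shared pool of apples; an "aggressive" policy may zap the rival when apples run low, removing it from play for a few turns.

```python
import random

def run_episode(n_apples, steps=100, aggressive=False, seed=0):
    """Toy stand-in for the 'fruit gathering' game described above.

    Two agents share a pool of apples. With aggressive=True, an agent
    may 'zap' its rival when apples are scarce, freezing the rival for
    a few turns so it can collect apples uncontested.
    """
    rng = random.Random(seed)
    apples = n_apples
    scores = [0, 0]
    frozen = [0, 0]  # turns each agent remains out of play after a zap
    for _ in range(steps):
        for i in (0, 1):
            if frozen[i] > 0:
                frozen[i] -= 1  # sit out this turn
                continue
            rival = 1 - i
            # Aggressive policy: when apples are scarce, sometimes zap
            # the rival instead of gathering.
            if aggressive and apples < 5 and frozen[rival] == 0 \
                    and rng.random() < 0.5:
                frozen[rival] = 5
                continue
            if apples > 0:
                apples -= 1
                scores[i] += 1  # gather one apple
    return scores
```

With abundant apples the aggressive branch never fires and both agents end up with equal shares; once scarcity kicks in, zapping lets one agent monopolise the remainder, mirroring the imbalance the researchers observed.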

These results differed from those of 'less intelligent' iterations of DeepMind, which opted out of using the laser beams when given the same test, meaning the agents could end up with equal shares of apples. Rhett Jones reported for Gizmodo that when the researchers used smaller DeepMind networks as the agents, peaceful co-existence was more likely. As more complex networks of agents were introduced, however, sabotage became increasingly likely.

The researchers then suggested that the more intelligent the agent, the better it was able to learn from its environment, allowing it to use highly aggressive tactics to come out on top. Joel Z Leibo, a member of the team, told Matt Burgess at Wired, "This model … shows that some aspects of human-like behaviour emerge as a product of the environment and learning. Less aggressive policies emerge from learning in relatively abundant environments with less possibility for costly action. The greed motivation reflects the temptation to take out a rival and collect all the apples oneself."

To counteract this, the agents were given a second game, which taught them that cooperating with each other could earn higher rewards. This was a success, and it demonstrated that the incentives an AI system faces shape its behaviour: when the situation is balanced so that cooperation pays, pursuing a goal can benefit everyone involved. Further tests will now be done to ensure that these AI systems always have people's interests at heart.


Image Credit: carloscastilla / 123RF Stock Photo

