#Ptech built Supply Chain Infrastructure - Destruction of Free Market
#ProTips #GIG #GlobalInformationGrid #IoT #InternetofThings
Total Intelligence (by @TransAlchemy)
Music by Crystal Castles (‘baptism’)
OODA helps our clients identify, manage, and respond to global risks and uncertainties while exploring emerging opportunities and developing robust and adaptive strategies for the future. OODA comprises a unique team of international experts capable of providing advanced intelligence and analysis, strategy and planning support, investment and due diligence, risk and threat management, training, decision support, crisis response, and security services to global corporations and governments. Our team has been at the forefront of next-generation threat/risk analysis and global business trends for almost two decades. OODA maintains a diverse network of international specialists drawn from industry, government, and academia in the United States and dozens of international locations. Our team maintains expertise in a variety of disciplines including counterterrorism, information warfare, low-intensity political violence, computer security, intelligence, predictive analysis, risk and threat management, economic & geopolitical analysis, law enforcement, crisis response, national security, and defense policy.
Last week, as I noted in my interview with Barrett from prison, Barrett’s mother pleaded guilty to her charge of obstructing evidence: she hid his computers from the FBI. Late last night, the news broke through the “Free Barrett Brown” Twitter account that Brown’s Wiki, ProjectPM, which is described on the project’s Twitter page as being, “Dedicated to research of government corruption, sitting in bubble baths drinking wine,” was being subpoenaed by the Department of Justice.
ProjectPM is an online compendium where Barrett and his fellow researchers share information they’ve been gathering about the intelligence industry in the United States. The Department of Justice has subpoenaed the site’s hosting provider, CloudFlare. While ProjectPM appeared to have gone down on Wednesday, it seems the site is back up. This kind of spotty availability has been common for the site over the past few months. Even Googling ProjectPM does not yield any results that point to the site.
That said, certain articles on the site are available through Google Cache. One of the more disturbingly intriguing articles available is on Persona Management, the software developed by intelligence companies to create phony online identities that can be used to manipulate others and disseminate propaganda. The article details a conversation, allegedly discovered through stolen internal emails, between Aaron Barr, the former CEO of the security company HBGary Federal, and the former CEO of ManTech, in which Barr demonstrates a primitive fake persona meant to “represent an intelligence contracting employee and USAF veteran, on Facebook and Twitter.” ProjectPM also claims that Aaron Barr and HBGary were out to “infiltrate Anonymous.”
Another article about Persona Development is even more concerning. The article details a PDF supposedly taken from a correspondence between Aaron Barr and Robert Frisbie that describes the tiers of fake personas and how believable they can actually become. It states that the “most detailed character[s]” also known as a “Level 3” are “required to conduct human-to-human direct contact likely in-person to satisfy some more advanced exercise requirements.
“This character must look, smell, and feel 100 percent real at the most detailed level. This character will need to be associated with a real company, hold a real position with that company and have all the technical and business artifacts associated with the position and organization. The trick here is while the persona needs to be real, the actual person may not be working in this role 100 percent of the time. In these cases there are still tricks that can be used to more rapidly age or update accounts. One such trick is to build outward facing accounts such as twitter, YouTube, or blogs with generic names.”
If ProjectPM goes down, there is a similar site out there operated by the hacktivist group Telecomix. They run a Wiki called Bluecabinet that serves as a counterpart to Barrett’s own ProjectPM. Before the Department of Justice’s subpoena against ProjectPM, I spoke to a volunteer for Bluecabinet, who described the differences between the two research projects to me: “Barrett Brown came to the Bluecabinet IRC mostly to discuss specific companies. He said that he liked Telecomix and Bluecabinet because we were more mature. But, both ProjectPM and Bluecabinet are concerned about the militarization of the internet and abuse of technology by governments that target the public, especially information activists.”
While Telecomix continues to do the same type of work as Barrett Brown, through their Bluecabinet Wiki, they do not seem discouraged by the punishment that Barrett is facing: “Barrett Brown was obviously targeted. He was outspoken and stood out as a journalist activist. The US government’s prosecution of information activists is so extreme, I’m concerned that they would create a honeypot or entrap me or other researchers. Obviously someone was monitoring Barrett in the IRC chatroom and documenting what links to data he posted. But his arrest has not slowed down the volunteer work of Bluecabinet at all. It has just made us more careful.”
ProjectPM’s lawyer, Jason Flores-Williams, has already launched a “Motion to Intervene and Quash Subpoena” and they have also published a press release online. In it, the Department of Justice’s subpoena is compared to censorship in China: “The Department of Justice is abusing its subpoena power to invade lives, threaten freedoms and destroy people for simply exploring the truth about their government. Like China, they are trying to suppress and control the free flow of information and ideas.”
As reported yesterday in the Dallas News, the US Attorney’s office has requested that the motion be dismissed. According to the office, Flores-Williams is not “licensed to practice law in Texas and he failed to explain why it was not possible to confer with the government.” So far, there has been no response from the judge.
While this legal battle rages on, Barrett Brown will be sitting in jail for a full year before he even sees a judge. So far, ProjectPM has served as an online monument to Barrett’s work that has survived beyond his isolation from the real world, but if the Department of Justice succeeds in its case to take the Wiki down, that all may be lost.
4 star Cyber General doesn’t know what an IP Address is - #NWO
And this is one of the guys that’s at the head of America’s cyber defense team.
#Cyborg Upgrade (Hugo de Garis vs. Koj3nt) (by @TransAlchemy)
With Carnegie Mellon’s cloud-centric new mobile app, the process of matching a casual snapshot with a person’s online identity takes less than a minute. Tools like PittPatt and other cloud-based facial recognition services rely on finding publicly available pictures of you online, whether it’s a profile image for social networks like Facebook and Google Plus or from something more official from a company website or a college athletic portrait. In their most recent round of facial recognition studies, researchers at Carnegie Mellon were able to not only match unidentified profile photos from a dating website (where the vast majority of users operate pseudonymously) with positively identified Facebook photos, but also match pedestrians on a North American college campus with their online identities.
The repercussions of these studies go far beyond putting a name with a face; researchers Alessandro Acquisti, Ralph Gross, and Fred Stutzman anticipate that such technology represents a leap forward in the convergence of offline and online data and an advancement of the “augmented reality” of complementary lives. With the use of publicly available Web 2.0 data, the researchers can potentially go from a snapshot to a Social Security number in a matter of minutes.
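The matching step these studies rely on can be sketched as a nearest-neighbor search over face “embeddings,” fixed-length vectors produced by a recognition model. The Python below is an illustrative sketch only, not PittPatt’s or Carnegie Mellon’s actual pipeline; the vectors, names, and similarity threshold are all invented for demonstration.

```python
import math

# Hypothetical face "embeddings": fixed-length vectors produced by a face
# recognition model. Matching an unknown photo against a gallery of labeled
# photos reduces to a nearest-neighbor search under cosine similarity.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify(unknown, gallery, threshold=0.8):
    """Return the best-matching identity, or None if below threshold."""
    best_name, best_score = None, -1.0
    for name, vec in gallery.items():
        score = cosine(unknown, vec)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# Toy gallery of labeled profile photos (vector values are illustrative).
gallery = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.1, 0.8, 0.5],
}
print(identify([0.88, 0.15, 0.28], gallery))  # matches "alice"
```

The real systems add a detection step (finding the face in a pedestrian snapshot) and search galleries of millions of crawled profile photos, but the core matching logic is this simple.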
The Internet never forgets a face. Read more at The Atlantic
Barrett Brown live interview - corporate spies, Anonymous, espionage exp0sed
audio is rly off, funny cuz the user channel is “HongPong” harharhar.
YOU RAFF! U RUSE!
Can the complexity of cities really be reduced to a single set of equations, as the physicist Geoffrey West claims, or even 3,000 of them? Is it really true, as West’s numbers would indicate, that Corvallis, Oregon—a city of 55,000 two hours’ drive south of Portland—is the most innovative city in America? Perhaps there’s something in the water, or it may have more to do with the fact that West’s model loves patents and Hewlett Packard’s Advanced Products Division is based there, along with its patent portfolio, one developed by thousands of researchers worldwide.
West’s conclusions are only as good as the data and the models (patents equal innovation) he has to work with. This problem—if you can’t measure it, you can’t manage it—combined with the impulse to improve cities by models, is driving both IBM’s “smarter city” strategy and the nascent “urban systems” movement, which seek to apply complexity science to cities. IBM sponsored the first Urban Systems Symposium in May (where West co-starred in a show-stopping discussion with Paul Romer and Stewart Brand) and today announced the latest plank in its smarter city platform: an “app” containing 3,000 equations which collectively seek to model cities’ emergent behavior. IBM also revealed its first customer, the City of Portland, Oregon.
Systems Dynamics for Smarter Cities, as the app is called, tries to quantify the cause-and-effect relationships between seemingly uncorrelated urban phenomena. What’s the connection, for example, between public transit fares and high school graduation rates? Or obesity rates and carbon emissions? To find out, simply round up experts to hash out the linkages, translate them into algorithms, and upload enough historical data to populate the model. Then turn the knobs to see what happens when you nudge the city in one direction.
“While other analytical approaches rely on breaking a problem down into smaller and smaller pieces, the model we’ve created recognizes that the behavior of a system as a whole can be different from what might be anticipated by looking at its parts,” Michael Littlejohn, vice president of strategy for Smarter Cities at IBM, said in this morning’s press release.
IBM pitched Portland in 2009 to assist in the creation of the “Portland Plan,” a 25-year road map for the city and its related government agencies, the first draft of which will be released later this month. The city’s goal was more modest than IBM’s, less a model of everything than a chance to “shine a light on the biggest drivers of change,” according to Joe Zehnder, the city’s chief planner. One of those drivers is the city’s commitment to a 40 percent decrease in carbon emissions by 2030, which necessitates less driving and more walking and biking. Running the model, Zehnder and his fellow planners found that obesity rates fell as the number of cyclists rose, which led to a further increase in biking. This knowledge proved useful both in formulating policy (more bicycle lanes, anyone?) and in creating metrics to measure their success down the road.
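The cycling-and-obesity feedback loop Zehnder describes can be caricatured as a tiny system-dynamics model: two coupled stocks updated by hand-tuned coefficients, which is the basic shape of what the 3,000-equation app does at scale. All numbers below are invented for illustration; this is not IBM’s model and not Portland’s data.

```python
# A minimal system-dynamics sketch: two coupled stocks (cycling share and
# obesity rate) with hand-tuned feedback coefficients. Illustrative only.
def simulate(years, cycling=0.05, obesity=0.30,
             attract=0.04, health=0.15):
    history = []
    for _ in range(years):
        # More cycling lowers obesity; a healthier population cycles more.
        obesity = max(0.0, obesity - health * cycling)
        cycling = min(1.0, cycling + attract * (0.5 - obesity))
        history.append((round(cycling, 3), round(obesity, 3)))
    return history

for year, (c, o) in enumerate(simulate(5), start=1):
    print(f"year {year}: cycling {c:.1%}, obesity {o:.1%}")
```

Note how the coefficients (`attract`, `health`) encode the modeler’s assumptions about causality; as Zehnder points out below, choosing those knobs is a value-laden act, not a neutral one.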
But Zehnder is also quick to point out the limitations of such software, both in terms of their ability to sway the public and to simulate reality. “As we sat down with the modelers, we had to make the point to them that we will not be able to convince our constituents to trust anything coming out of a black box,” he says, adding “the whole act of choosing variables is a political one, a value-laden one,” underscoring the fact that choosing what to measure and what not to measure not only compromises the integrity of any model’s ability to reflect reality, but also the prerogatives of the ones building the model.
As the field of urban systems gathers steam, it’s important to remember that IBM and its fellow technology companies aren’t the first to offer a quantitative toolkit to cities. In 1968, New York City mayor John Lindsay (no relation) announced the creation of the New York City-RAND Institute, an effort to apply the “RAND method” of game theory and “systems analysis” to city management, and as Lindsay put it, complete “the introduction into city agencies of the kind of streamlined, modern management that Robert McNamara applied in the Pentagon with such success in the past seven years.”
As Joe Flood described in his book The Fires, it turned out the Vietnam-era Pentagon was not the best role model. RAND promptly began building models they thought could predict fire patterns in New York, and then used them to justify closing fire stations in the poorest sections of the Bronx, Brooklyn and Harlem in the name of efficiency, a decision that would ultimately displace 600,000 people as their neighborhoods burned.
New York was hardly alone. Across the country, mayors appealed to the best and brightest of RAND, Lockheed, and McDonnell to apply themselves to the “urban crisis” of the 1960s, leading urban planners to co-opt their rationalist mindset and many of their technologies, most notably GIS and satellite imagery. And what was in it for RAND? An opportunity to diversify beyond their Air Force contracts.
None of this is to say Portland will burn because of its decision to let IBM help it create a few metrics. But with mayor Sam Adams earnestly declaring that its software “can help us become a Smarter City” (as per the press release) it’s important to keep in mind the limitations of modeling cities, and the dangers of forgetting.
Earlier this year, ThinkProgress obtained 75,000 private emails from the defense contractor HBGary Federal via the hacktivist group called Anonymous. The emails led to two shocking revelations. First, that an assortment of private military firms collectively called “Team Themis” had been tapped by Bank of America to conduct a cyber war against reporters sympathetically covering the Wikileaks revelations. And second, that late in 2010, the same set of firms began work separately for the U.S. Chamber of Commerce, a Republican-aligned corporate lobbying group, to develop a similar campaign of sabotage against progressive organizations, including the SEIU and ThinkProgress.
In presentations obtained by ThinkProgress from the e-mail dump detailing the tactics potentially used against progressives, HBGary Federal floated the idea of using “fake insider personas” to infiltrate left-leaning groups critical of the U.S. Chamber of Commerce’s policies. As HBGary Federal executive Aaron Barr described in several emails, his firm could work with partner companies Palantir and Berico Technologies to manipulate fake online identities, using networks like Facebook, to gain access to private information from his targets. Other presentations are more specific and describe efforts to use social media to hack computers and find vulnerabilities among even the families of people who work at organizations critical of the Chamber.
In one email from the dump, Barr discusses a fake persona he created called “Holly Weber.” She would be born in Portland in 1984, attend Reynolds High School, and work for Lockheed Martin after a stint in the Air Force. Earlier this week, Twitter users actually identified the phony account. Before it was taken down, ThinkProgress snagged screen shots of the fake persona’s Facebook and LinkedIn accounts. (Barr also described his strategy for pretending to be teenagers online). View a screenshot of the fake account below:
Barr, who sold his illicit talents to the highest bidder, appears to be drawing on Maxim for inspiration. A Maxim covergirl named Holly Weber was also born in 1984. Unlike Barr’s creation, the Maxim one is real.
Hunton and Williams, the law firm representing the U.S. Chamber of Commerce, had been immersed in talks with HBGary Federal, Palantir, and Berico to deliver on a $2 million deal to move forward with the hacking plot against the Chamber’s critics. However, after Anonymous leaked HBGary’s emails and a few reporters picked up on the story, the Chamber distanced itself from the deal. The emails show that HBGary Federal had also worked to sell “persona management” solutions to the U.S. government for cyber intelligence work.
[- edit: for more links / info check out @phracker -]
see also cyberspace page
You’re not who you think you are;
you’re not who others think you are;
you’re who you think others think you are!
—- Source Unknown
Humankind cannot bear very much reality.
—- T. S. Eliot
Not everything that counts can be counted, and not everything that can be counted counts.
—- Albert Einstein
All truth passes through three stages. First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as being self-evident.
—- Arthur Schopenhauer
When the legend becomes fact, print the legend.
—- The Man Who Shot Liberty Valance (1962 movie)
They will argue that change occurs so rapidly in today’s information-based society that the United States must be proactive in incorporating the lessons of Desert Storm into its future defense plans. Actually, this view is dangerously myopic. Abundant evidence exists to suggest that the twenty-first century could be dominated by culturally based conflict. The strategy of paralysis is ineffective against such an amorphous threat. Therefore, creating a US military force that is overly dependent on a high-technology air arm would be, to use Howard’s words, too wrong.
The issue of species dominance will dictate our global politics this century. Given the rate at which technologies are developing that enable “artilects”—artificial intellects—it is likely that humanity will be able to build artilects with mental capacities that are literally trillions upon trillions of times above the human level. Humanity will then have to choose whether to become the No. 2 species on the planet or not.
The AI Goldmine
In the coming few decades, the rise of artificial intelligence will be a veritable goldmine for humankind. I predict that by the year 2030, one of the world’s biggest industries will be “artificial brains,” used to control home robots that will be genuinely intelligent and useful. Millions, if not billions, of people will be prepared to spend more money on a household robot than on a car. It is my personal ambition in the next five to 10 years to persuade the federal government in China (where I’m directing the building of China’s first artificial brain) to create a CABA (Chinese Artificial Brain Administration), similar in scope to America’s NASA, consisting of thousands of scientists and engineers, to build artificial brains for the Chinese home robot industry and other applications. I suggest that the U.S. do something similar—a NABA.
Moore’s Law and e-Neuroscience
Moore’s Law states that the number of transistors on a chip doubles every 18 months. This trend has been valid for over 40 years and is likely to continue until around 2020, by which time we will be able to place one bit of information on a single atom. These atom-bits will be able to switch their state (a 0 or a 1) in femtoseconds, which are quadrillionths (10^-15) of a second. There are a trillion trillion (10^24) atoms in a handheld object, such as an apple, so potentially, the information processing capacity of such an object would be about 10^40 bits per second. Compare this number with the estimated equivalent of the human brain, which is about 10^16 bits per second, a trillion trillion times smaller. You’ll begin to see why I believe that the rise of the artilect, a godlike intelligent machine, will be so disruptive later this century.
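The arithmetic behind those figures is easy to check. The sketch below simply reproduces the back-of-envelope calculation in the paragraph; the 10^40 and 10^16 estimates are the author’s round numbers, not measured values.

```python
# Back-of-envelope check on the bit-rate comparison above.
artilect_bps = 10**40   # claimed potential of an apple-sized artilect
brain_bps = 10**16      # rough estimate for the human brain

ratio = artilect_bps // brain_bps
print(f"artilect advantage: 10^{len(str(ratio)) - 1}")  # 10^24

# 10^24 is a trillion trillion, matching the text.
trillion = 10**12
assert ratio == trillion * trillion
```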
You may object that a massive bit-processing rate is only a necessary (but not sufficient) condition for generating hyper-intelligence. Agreed. What is also needed is the appropriate human brain-like neural circuitry, but this is coming too. Nanotechnology, or molecular scale engineering, is increasingly supplying the tools to decipher the secrets of human brain function. Today, thanks to Henry Markram’s work in Switzerland, every neural connection is known in a single cortical column of a rat brain’s cortex. (A rat has about a thousand such columns, each consisting of about 10,000 highly interconnected neurons, and the human brain contains about a million.)
This detailed connectivity knowledge has been put into supercomputers, so that computer-savvy neuroscientists can perform experiments in a computer, that is, conduct “e-neuroscience.” So a supercomputer will be able to perform the same functions as a rat’s cortical column, but a million times faster, at electronic speeds compared to the rat’s chemical speeds. Following Moore’s Law, the whole rat brain will thus be simulated within a decade, and the human brain a decade or two later.
The Species Dominance Debate
So in about a decade there will be a thriving artificial brain industry, and nearly everyone will have a home robot, which will be upgraded every two or three years. Each new home robot generation will be smarter and more useful than the previous generation, so that as the gap between the human intelligence level and the artificial intelligence level gets smaller every year, the species dominance debate will heat up. Millions of people will be asking such questions as: Can the machines become smarter than humans? Is that a good thing? Should there be a legislated upper limit to machine intelligence? Can the rise of machine intelligence be stopped? What if China’s soldier robots are smarter than America’s soldier robots? And so on and so forth.
Considering all this, I predict that humanity will split into three major philosophical, ideological, political groups, which I label as follows.
—The Cosmists (based on the word “cosmos”) will be in favor of building these godlike machines (the artilects), who would be immortal, think a million times faster than humans, have unlimited memory, go anywhere, do anything and take any shape. The Cosmists would take a quasi-religious view that they are god builders. Privately, I am a Cosmist, but publicly, I have mixed feelings about the rise of the artilect.
—The Terrans (based on the word “terra,” meaning the earth) will be opposed to the construction of artilects, fearing that in a highly advanced form, the artilects may decide to wipe us out. To ensure that the probability that this might happen is zero, the Terrans will insist that the artilects are never built in the first place. But this strategy runs utterly contrary to what the Cosmists want. The Terrans will be prepared to go to war against the Cosmists to ensure the survival of the human species.
—The Cyborgists (based on the word “cyborg,” meaning cybernetic organism that is part machine, part human) will want to become artilect gods themselves by adding artilectual components to their own brains, thus avoiding the bitter conflict between the Cosmists and the Terrans.
The Artilect War and Gigadeath
I differ sharply with well-known futurist Ray Kurzweil on his over-optimistic prediction that the rise of the artilect this century will be a positive development for humanity. I think it will be a catastrophe. I see a war coming, the “Artilect War,” not between the artilects and human beings, as in the movie Terminator, but between the Terrans, Cosmists and Cyborgists. This will be the worst, most passionate war that humanity has ever known, because the stakes—the survival of our species—have never been so high. Given the period in which this war will occur, the late 21st century, with late 21st century weapons, the scale of the killing will not be in the millions, as in the 20th century (the bloodiest in history, with 200-300 million people killed in wars, purges, holocausts and genocides) but in the billions. There will be gigadeath.
The Terrans will “First Strike”
Imagine a world in which the cyborgs become increasingly prevalent. A young mother who has just given birth may choose to add a grain of artilectual sand to her newly born baby’s brain, converting it into an artilect. There is so much computing capacity in that grain of sand that she has effectively “killed” her baby. It is no longer human, but an artilect in human disguise. Imagine older parents watching their adult children becoming cyborgs, so that their children are no longer human. The parents will feel they have lost them. The rise of the artilects and the cyborgs will be profoundly disruptive to human culture, creating deep alienation and hatred.
Kurzweil claims that if ever a war occurred between the Terrans and the other groups it would be a quick no-contest battle. The vastly superior intelligence of the artilect group would quickly overcome the Terrans. Therefore I claim that the Terrans will have to strike first while they can, during the “window of opportunity,” when they have comparable intelligence levels. If they wait too long, then Kurzweil’s dismissive view may become valid.
The Cosmist/Terran Split
I give regular talks on the rise of the artilect and invite my audiences to vote on whether they are sympathetic more to the Cosmist view or to the Terran view. The results are always split about evenly. Individuals are torn between the awe of building artilect gods and the horror of the prospect of a gigadeath war. The evenness of the split bodes even more negatively for the future.
Dr. Hugo de Garis is director of the Artificial Brain Lab, in the School of Information Science and Technology at Xiamen University, Xiamen, Fujian Province, China, where he is directing the building of China’s first artificial brain. He is the author of the books The Artilect War: Cosmists vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines and Multis and Monos: What the Multicultured Can Teach the Monocultured: Towards the Creation of a Global State.
Ford Motor Company is using Google’s Prediction API to improve energy efficiency in its cars, the company said.
The Prediction API is a tool developers can use to, for example, write applications that recommend content such as movies or target key customers. The tool leverages Google’s massive cloud of servers and storage.
At Google I/O in San Francisco May 11, Ford said the API could be used to gauge driver behavior and tune car controls to boost fuel or hybrid-electric efficiencies.
Specifically, Ford is using the prediction software to study driving history, including where a driver has traveled and at what time of day, over the prior two-year period.
Using this driving history, which would be completely voluntary, Ford believes it will be able to divine where a driver is headed at the time of his or her departure.
The motor vehicle maker said it will be able to enable the car to “optimize itself” for the route, thus preserving fuel and/or electricity.
Ryan McGee, technical expert of vehicle controls architecture and algorithm design for Ford Research and Innovation, explained how this works at I/O, albeit on a screen slide show rather than an actual vehicle.
When a vehicle owner opts in to use the service, an encrypted driver data usage profile is built based on routes and time of travel.
When a driver starts the car, Google Prediction software will compare the driver’s historical driving behavior with current time of day and location to predict the most likely destination and how to optimize driving performance to and from that location.
Then, an on-board computer might ask the driver if he or she is going to work. If the driver replied in the affirmative, the car’s computer would kick in a powertrain control strategy for the trip.
For example, a predicted route could include an area restricted to electric-only driving, whereupon a plug-in hybrid vehicle could plan its energy usage over the total distance of the route in order to preserve enough battery power to switch to all-electric mode when traveling through that area.
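The destination-prediction step described above can be approximated with a simple frequency model over a driver’s trip history. This sketch is not Google’s Prediction API or Ford’s system; the class, its interface, and the toy trips are all invented to show the idea.

```python
from collections import Counter, defaultdict

# Hypothetical sketch: given a history of (start, hour_of_day, destination)
# trips, predict the most frequent destination for the current context.
class TripPredictor:
    def __init__(self):
        self.history = defaultdict(Counter)

    def record(self, start, hour, destination):
        self.history[(start, hour)][destination] += 1

    def predict(self, start, hour):
        counts = self.history.get((start, hour))
        if not counts:
            return None  # no history for this context yet
        return counts.most_common(1)[0][0]

p = TripPredictor()
p.record("home", 8, "work")
p.record("home", 8, "work")
p.record("home", 8, "gym")
print(p.predict("home", 8))  # "work"
```

A production system would fold in location traces, day of week, and calendar data, and would run the model in the cloud rather than on the vehicle, but the shape of the inference is the same.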
In addition to being useful for electric and hybrid vehicles, Ford said the technology could be used for vehicles operating in “low emission zones,” areas where only electric and low-emission vehicles are allowed to drive.
The idea, currently being tested in London, Stockholm and Berlin, is designed to preserve the environment and cut down on traffic. If a vehicle could predict exactly when it might be entering such a zone, it could program itself to comply with regulations, such as switching the engine to all-electric mode.
How the Prediction API would play with Ford’s current Sync navigation and traffic information system, which also leverages the cloud to facilitate communication between vehicles, computers and drivers, is unclear.
Ford’s embrace of Google’s Prediction API comes one year after rival General Motors at Google I/O 2010 added navigation features to its Chevrolet Volt application that help users track their vehicles on Google Maps and search for destinations on Android smartphones.
Operating on a single CPU, it could take Watson two hours to answer a single question. This is a far cry from the three seconds it takes to be competitive on Jeopardy!. For Watson to achieve the speed of the human brain in delivering a single, precise answer to a question requires thousands of POWER7 computing cores working in a massively parallel system.
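The speedup those numbers imply is easy to work out: two hours down to three seconds is a factor of 2,400. The core count below is an assumption for illustration, not IBM’s published figure.

```python
# Back-of-envelope check on the Watson timing figures above.
single_cpu_seconds = 2 * 60 * 60    # ~2 hours per question on one CPU
target_seconds = 3                  # competitive Jeopardy! response time

speedup_needed = single_cpu_seconds / target_seconds
print(f"required speedup: {speedup_needed:.0f}x")  # 2400x

# Illustrative only: the core count is an assumption, not IBM's figure.
cores = 2880
efficiency = speedup_needed / cores
print(f"{cores} cores would need ~{efficiency:.0%} parallel efficiency")
```

The point of the arithmetic is that the target is only reachable if the question-answering pipeline parallelizes well, which is why the system design matters as much as the raw core count.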
Watch the video to find out how building a smarter system like Watson involves optimizing hardware and software into a solution greater than the sum of its parts.