….this article is from 1996, click the title to read the whole thing… kthx.
Not content to let scientists figure out how to engineer animals and plants on a case-by-case basis, DARPA wants to generalize the process, creating a manufacturing framework for all living things. The “Living Foundries” program sets up an assembly-line paradigm for life and its constituent parts, and the DOD’s crazy-science arm just handed out its first research grants.
Among the recipients are Caltech, MIT and the J. Craig Venter Institute, a fitting result given the latter group’s prior success in creating the first-ever synthetic organism. The full suite of awards was announced May 22, comprising $15.5 million spread among six companies and institutions.
DARPA announced Living Foundries last summer, with the goal of an engineering framework that could apply to any living thing. Under this program, genetic engineering would no longer be limited to modification of existing organisms — instead, scientists would be able to concoct anything they wanted from scratch, using a suite of ingredients and processes that could apply in any situation. Such a system is better, DARPA argues, than already promising examples of synthetic biology, which are too laborious, lab-specific and expensive to be universally applicable.
To start, Living Foundries will create a basic library of modularized parts that can be assembled in infinite variations. Like computers born of circuits and wires, endless forms of life could arise from a brew of proteins and DNA — perhaps bacteria that could eat cancer, maybe renewable fuels, and so on. The ultimate goal is a genetic starter set that could be snapped together like so many Legos, forming any system the military might require.
The contract winners will have to come up with this library of parts, as well as a way to test the new bio-products, Danger Room reports. DARPA also wants the teams to compress the biological design, build and test cycle 10-fold, in both time and cost.
It may not be far-fetched — synthetic biology is already trending toward greater efficiency, both in the engineered organisms themselves and in the tools scientists are using to develop and evaluate them. As one example, compare the cost of genome sequencing now to the Human Genome Project’s gargantuan price tag. If the charter members of Living Foundries are successful, bio-engineering will become as efficient as factory production.
cc: @WTPWNBC ~ Want An #RFID Chip Implanted Into Your Hand? Here's What The DIY Surgery Looks Like (Video)
Amal Graafstra snaps on a pair of black rubber gloves. “Do you want to talk about pain management techniques?” he asks. The bearded systems administrator across the table, who requested I call him “Andrew,” has paid Graafstra $30 to have a radio-frequency identification (RFID) chip injected into the space between his thumb and pointer finger, and as Graafstra describes Lamaze-type breathing methods, Andrew looks remarkably untroubled, in spite of the intimidatingly large syringe sitting on the table between them.
Graafstra finishes his pain talk, fishes a tiny cylindrical two-millimeter-diameter EM4012 RFID chip out of a tin of isopropyl alcohol, and drops it into the syringe’s end, replacing the RFID tag intended for pets that came with the injection kit. He swabs Andrew’s hand with iodine, carefully pinches and pulls up a fold of skin on the top of his hand to create a tent of flesh, and with the other hand slides the syringe into the subcutaneous layer known as the fascia, just below the surface.
Then he plunges the plastic handle and withdraws the needle. A small crowd of onlookers applauds. The first subject of the day has been successfully chipped.
Here’s a video of the procedure.
Over the course of the weekend, Andrew would be one of eight people to undergo the RFID implantation among the 500 or so attendees of Toorcamp, a hacker conference and retreat near the northwest corner of Washington State. Graafstra’s “implantation station” was set up in the open air: Any camper willing to spend $30 and sign a liability waiver could have the implantation performed, and after the excitement of Andrew’s injection, a small line formed to be next.
And why volunteer to be injected with a chip that responds to radio signals with a unique identifier, a procedure typically reserved for tracking pets and livestock? “I thought it would be cool,” says Andrew, when we speak at a picnic table a few minutes after his injection. (The pain, he tells me, was only a short pinch, followed by a “weird feeling of a foreign body sliding into my hand.”)
The practical appeal of an RFID implant, in theory, is quick authentication that’s faster, cheaper and more reliable than other biometrics like thumbprints or facial scans. When the chip is hit with a radio frequency signal, it emits a unique identifier number that functions like a long, unguessable password. Implantees like Andrew imagine the ability to unclutter their pockets of keys and keycards and instead access their cars, computers, and homes with a mere wave of the hand.
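For the curious, the read-and-check loop behind that wave of the hand is about as simple as software gets. Here's a rough, hypothetical sketch in Python: the serial port, tag format, baud rate, and unlock hook are all stand-ins, not details of Graafstra's actual rig.

```python
# Hypothetical RFID door-unlock loop: poll a serial-attached reader for tag
# IDs and compare each against an allowlist of enrolled implants. Port name,
# tag format, and the unlock action are illustrative assumptions.
import time
import serial  # pyserial

ALLOWED_TAGS = {"0415A2B3C4"}  # unique IDs enrolled by the owner (made up)

def unlock_door():
    print("Access granted, unlocking")  # swap in a GPIO relay pulse, etc.

def main():
    reader = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)
    while True:
        raw = reader.readline().strip()
        if not raw:
            continue
        tag_id = raw.decode("ascii", errors="ignore")
        if tag_id in ALLOWED_TAGS:
            unlock_door()
        else:
            print(f"Unknown tag: {tag_id}")
        time.sleep(0.2)  # debounce so one swipe doesn't fire repeatedly

if __name__ == "__main__":
    main()
```

Worth noting: a plain pet-style tag replays the same ID to any reader that asks, so a cloned tag defeats this scheme. It's a convenience key, not strong cryptographic authentication.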
Andrew says he initially hoped to use his RFID implant instead of the HID identity card his office uses for entry, but wasn’t deterred from the injection when Graafstra told him that HID uses a proprietary system whose chips Graafstra couldn’t implant. “I don’t have anything specific in mind, now, but I didn’t know when I’d have another opportunity to do it,” says Andrew. “And it’s a good excuse to start learning more about RFID.”
Another young hacker who underwent the procedure at Toorcamp said he hopes to install an RFID access system at the door of his local hackerspace. A young woman with a small collection of rings and studs in her ears compared her new implant to aesthetic body modifications like piercings and tattoos, or even the fringier culture of erotic “needleplay.” “I guess I have an interest in my body’s response to pain and modification,” she says. “There’s a certain thrill of the new.”
The Food and Drug Administration has just approved a device that is integrated into pills and lets doctors know when patients take their medicine – and when they don’t. Adherence to prescriptions is a serious problem, as about half of all patients don’t take medications the way they’re supposed to. But with doctors now able to play Big Brother, that statistic could change drastically.
The device, made by Proteus Digital Health, is a silicon chip about the size of a grain of sand. With no battery and no antenna, it is powered by the body itself. The chip contains small amounts of copper and magnesium. After being ingested, the chip interacts with digestive juices to produce a voltage that can be read from the surface of the skin through a detector patch, which then sends a signal via mobile phone to inform the doctor that the pill has been taken. The patch also detects heart rate and can estimate the patient’s amount of physical activity. More than just a way for doctors to look over their patients’ shoulders, it will allow doctors to better assess whether a person is responding to a given dose, or whether that dose needs to be adjusted.
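To picture the data path the paragraph describes (chip to patch, patch to phone, phone to doctor), here's a minimal sketch of the final relay hop. The event format and clinic endpoint are invented for illustration; Proteus has not published an API like this.

```python
# Hypothetical patch-to-clinic relay: when the skin patch registers the
# voltage blip of a dissolving pill sensor, the paired phone forwards an
# "ingestion event" to the clinic. Schema and endpoint are made up.
import json
import time
import urllib.request

CLINIC_ENDPOINT = "https://example-clinic.invalid/api/adherence"  # placeholder

def report_ingestion(patient_id: str, medication: str) -> None:
    event = {
        "patient_id": patient_id,
        "medication": medication,
        "event": "pill_ingested",
        "timestamp": time.time(),
    }
    req = urllib.request.Request(
        CLINIC_ENDPOINT,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    try:
        urllib.request.urlopen(req, timeout=5)  # real systems add auth, retries
    except OSError as exc:
        print(f"delivery failed (placeholder endpoint): {exc}")

report_ingestion("patient-042", "placebo")
```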
After clinical trials that began in 2009, the FDA approval follows European regulatory approval in August 2010. Right now the FDA has only approved the chip for placebo pills, which were used in trials showing the chip to be safe and highly accurate. Proteus hopes to gain approval to use the digestible chip with other medicines. Andrew Thompson, chief executive of Proteus, says the chip has already been tested with treatments for tuberculosis, mental health, heart failure, hypertension, and diabetes.
The company is currently working with makers of metformin, a drug used to treat type 2 diabetes and the most commonly prescribed drug in the world. The company also plans to add a wireless glucose meter to the device so that dosage amount and frequency can be correlated with changes in blood glucose levels.
They would also like to digitize the drugs taken to treat neurological disorders. Disorders such as Parkinson’s Disease and Huntington’s Disease often require patients to receive drugs regularly – sometimes several times per day – and for extended periods of time. Ensuring that these patients are adhering to the prescribed regimen could greatly improve quality of life for some.
Transplant patients, who often have to take immunosuppressive drugs for long periods following surgery, could also potentially benefit from digitizing their medicine.
Ingestible body sensors have been discussed for a while now, but Proteus’ digital pills are the first ingestible sensor to be approved by the FDA, according to Nature. This first step toward regulated ingestible sensors will undoubtedly be followed by others. The Programmable Bio-Nano-Chip developed by Rice University scientists can detect heart disease or cancer from a saliva sample. If the chips were ever permanently implanted into the body, they could provide an early alarm system for these diseases long before symptoms are detected by the patient. Scientists at Tel Aviv University in Israel and Brigham & Women’s Hospital in Boston are developing a pill-sized robot that is remotely powered by an MRI machine to swim through the gut and look for the molecular signs of gastrointestinal cancer.
The first demonstration involved a placebo, but surely drug companies are eager to digitize their pills – and make sure patients empty out their prescriptions when they’re supposed to. Though possible, it is hard to imagine a complication arising when the device is used with, say, Lipitor, that did not arise with the placebo. The usual FDA bottleneck could be loosened with the first incorporation into a bona fide drug.
The possible uses for ingestible sensors are as varied as the body itself. As with computer chips, ingestible chips will follow the exponential path of Moore’s Law and be able to sense more with less in the future. The FDA ruling could do much to get the technology on the fast track.
Scientists Reconstruct Brains’ Visions Into Digital Video In Historic Experiment
UC Berkeley scientists have developed a system to capture visual activity in human brains and reconstruct it as digital video clips. Eventually, this process will allow you to record and reconstruct your own dreams on a computer screen.
I just can’t believe this is happening for real, but according to Professor Jack Gallant—UC Berkeley neuroscientist and coauthor of the research published today in the journal Current Biology—”this is a major leap toward reconstructing internal imagery. We are opening a window into the movies in our minds.”
Indeed, it’s mindblowing. I’m simultaneously excited and terrified. This is how it works:
They used three different subjects for the experiments—incidentally, all members of the research team, because the work requires being inside a functional Magnetic Resonance Imaging (fMRI) system for hours at a time. The subjects were exposed to two different groups of Hollywood movie trailers as the fMRI system recorded blood flow through their brains’ visual cortex.
The readings were fed into a computer program in which they were divided into three-dimensional pixel units called voxels (volumetric pixels). This process effectively decodes the brain signals generated by moving pictures, connecting the shape and motion information from the movies to specific brain actions. As the sessions progressed, the computer learned more and more about how the visual activity presented on the screen corresponded to the brain activity.
An 18-million-second picture palette
After recording this information, another group of clips was used to reconstruct the videos shown to the subjects. The computer analyzed 18 million seconds of random YouTube video, building a database of predicted brain activity for each clip. From all these videos, the software picked the hundred clips whose predicted activity most closely matched the activity recorded while the subject watched, combining them into one final movie. Although the resulting video is low-resolution and blurry, it clearly matches the actual clips watched by the subjects.
Think of those 18 million seconds of random videos as a painter’s color palette. A painter sees a red rose in real life and tries to reproduce the color using the different kinds of reds available in his palette, combining them to match what he’s seeing. The software is the painter and the 18 million seconds of random video are its color palette. It analyzes how the brain reacts to certain stimuli, compares that reaction to the brain’s reactions to the 18-million-second palette, and picks the clips that most closely match. Then it combines those clips into a new one that approximates what the subject was seeing. Note that the 18 million seconds of video are not what the subject was seeing; they are random bits used only to compose the brain image.
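In code, that palette-matching step boils down to a nearest-neighbor search plus averaging. Below is a toy Python sketch under loud assumptions: brain responses are plain voxel vectors, similarity is simple correlation, and "combining" is a weighted pixel average. The actual decoder in the paper is a fitted Bayesian encoding model, so treat this as the cartoon version.

```python
# Toy clip-library reconstruction: score every library clip by how well its
# predicted voxel response correlates with the measured response, then blend
# the frames of the top matches into one output image.
import numpy as np

def correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between two flattened response vectors."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

def reconstruct(measured, library_responses, library_frames, top_k=100):
    scores = np.array([correlation(measured, r) for r in library_responses])
    best = np.argsort(scores)[-top_k:]      # indices of the best matches
    w = np.clip(scores[best], 0.0, None)    # ignore anti-correlated clips
    return np.tensordot(w / w.sum(), library_frames[best], axes=1)

# Tiny fake demo: a 500-clip "library", 2,000 voxels, 32x32 grayscale frames.
rng = np.random.default_rng(0)
lib_resp = rng.standard_normal((500, 2000))
lib_frames = rng.random((500, 32, 32))
measured = lib_resp[42] + 0.5 * rng.standard_normal(2000)  # noisy match to clip 42
print(reconstruct(measured, lib_resp, lib_frames).shape)   # (32, 32)
```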
Given a big enough database of video material and enough computing power, the system would be able to re-create any images in your brain.
In this other video you can see how the process worked for the three experimental subjects. In the top-left square you can see the movie the subjects were watching while they were in the fMRI machine. Right below it you can see the movie “extracted” from their brain activity. It shows that this technique gives consistent results independent of what’s being watched—or who’s watching. The three rows of clips next to the left column show the random movies that the computer program used to reconstruct the visual information.
Right now, the resulting quality is not good, but the potential is enormous. Lead research author—and one of the lab test bunnies—Shinji Nishimoto thinks this is the first step toward tapping directly into what our brain sees and imagines:
Our natural visual experience is like watching a movie. In order for this technology to have wide applicability, we must understand how the brain processes these dynamic visual experiences.
The brain recorders of the future
Imagine that. Capturing your visual memories, your dreams, the wild ramblings of your imagination into a video that you and others can watch with your own eyes.
This is the first time in history that we have been able to decode brain activity and reconstruct motion pictures on a computer screen. The path that this research opens boggles the mind. It reminds me of Brainstorm, the cult movie in which a group of scientists led by Christopher Walken develops a machine capable of recording the five senses of a human being and then playing them back into the brain itself.
This new development brings us closer to that goal which, I have no doubt, will happen at some point. Given the exponential increase in computing power and our understanding of human biology, I think this will arrive sooner than most mortals expect. Perhaps one day you’ll be able to go to sleep wearing a flexible band labeled Sony Dreamcam around your skull. [UC Berkeley]
#AugmentedReality #Transhumanism #Google’s ‘Project Glass’ is the future of wearable computing
Google Glass isn’t a real thing, and it probably won’t be for a while. But the technology is there and the resources are there; it’s just a matter of putting them all together into a product that people would feel comfortable buying and wearing. I use my phone all the fucking time, to the annoyance of many around me, but I don’t know if I would want to be this immersed. Maybe. At least I could keep eye contact at dinner while I’m reading email and RSS feeds.
Submitted by InformationDesk
NASA and GM are working together on a robotic glove that can help make working with your hands a little easier. Officially called the Human Grasp Assist device — but also known as either K-glove or Robo-glove — the glove reduces the amount of force needed while using a tool, which decreases the chance of repetitive stress injuries. This is important for both NASA and GM, as the glove could potentially be used by both astronauts and auto assembly workers who handle tools for long periods of time.
The two organizations previously worked together on the Robonaut — which recently managed its first handshake in space — and the glove actually borrows some of that technology. It includes actuators in the fingers for a better grip, and pressure sensors to determine when you have a tool in your hand. Right now the Robo-glove weighs about two pounds and can reduce the amount of force needed for a given job by around half. So if a job takes 15-20 pounds of pressure, it could be reduced to as little as 5-10 pounds. The team is currently working on its third prototype, which will focus on making the glove smaller and lighter — so it looks like we’ll still have to wait a while to get stronger through Deus Ex-style human augmentation.
Surveillance is developing in more and more domains and at an extremely rapid pace. Surveillance cameras are obviously involved, as are miniaturized smart cards, mobile phones, the growing number of recording devices of all kinds, the Internet and electronic “cookies.” This is the era of Big Brother! Today, when cameras equipped with face-recognition software add their specters to the pantheon of the failed illusions of security, the government is trying to pass liberty-killing laws under the fallacious pretext of the “fight against terrorism.”
Here, we are made to live in the psychosis of continual control: filmed, surveilled and filed all day, as if we are all criminal suspects, and asked to accept the “fact” that — in the name of our security — men, women and children will have to be killed. We denounce those truly responsible for this masquerade, those thirsty for political power who do not hesitate to use demagoguery and opportunism to inflame the fears of “the Other” and who, even before September 11, were playing the “Total Security” card in an attempt to get votes. We demand the rejection, from now on, of politics in the service of the maintenance of the market economy and social inequities, of politics that have as their guiding principle the enslavement of the general population and the restriction of human possibilities.
We hope to live in a different world, one in which we don’t have to submit ourselves to the government-subsidized industrial companies that pollute our air, land and water, that rapaciously enrich themselves by riding the backs of workers and those in precarious socio-economic situations, and that set up the market in the surveillance of human beings. The images of money-traffickers and fiscal paradises, political operatives who can act with total impunity, and deal-makers working in the rich soils of the powerful will not be captured by surveillance cameras, despite the fact that they are the ones who are responsible for the world in which we are forced to live, and who should be held accountable for it.
The supermarket is surveilled, as are the streets, offices and factories. What a plethora of images! And why are they captured? In the supermarket, each movement and gesture of the apathetic consumer is filmed and analyzed so as to discover the unknown factor that will facilitate the sale of mad-cow-infected meats, spoiled cheeses, and aseptic chickens. At the office and at the factory, we are surveilled in the name of profits; in the street, we are surveilled so that we never lose the sense of being watched! For what purpose? To force behavior to become normalized; all movements other than normal become suspicious. When will we address ourselves to the real problems, the ones that erode our capacity for life?
When will we have the intelligence — which is lacking in this society, which turns in the wrong direction — to refuse to accept these conditions, neither for ourselves nor for the generations to come? The progress of digitization and computerized information serves the type of social control that we fear will take hold in the future. Aren’t people already enmeshed in the gears of the market, which without hesitation supports every political manipulation so as to have servile consumers? We say “no” to the liberty-killing laws that would legalize this fuckery.
We reclaim the right to possess “disguises.” We reclaim the right to a private life. We reclaim individual freedom, not simply the freedom to exist, but all freedoms.
— Collective for Individual Freedom in the Age of Information Technologies
A divided federal appeals court panel has upheld the constitutionality of California’s DNA “test on arrest” policy, which is building a massive database compiled from the DNA of people arrested for felonies in the Golden State — regardless of whether they are ultimately convicted of anything.
The “test on arrest” policy has been endorsed by President Barack Obama, who has encouraged states and federal governments to link up their databases in order to solve crimes. Law enforcement officials say DNA databases have solved numerous crimes, including murders and sex assaults.
In a 2-1 decision issued Thursday (and posted here), the U.S. Court of Appeals for the 9th Circuit ruled that collecting and maintaining the DNA sample —obtained from swabbing the inside of an arrestee’s mouth — does not violate the Fourth Amendment’s protection against unreasonable searches and seizures.
“The physical extraction of DNA using a buccal swab collection technique is little more than a minor inconvenience to felony arrestees, who have diminished expectations of privacy. Moreover, it is substantially less intrusive, both physically and emotionally, than many of the other types of approved intrusions that are routinely visited upon arrestees,” Judge Milan Smith wrote in an opinion joined by Judge James Todd, a district judge assigned to the appellate panel.
Smith noted that the database contains just some markers from individuals’ DNA (though the original samples are also retained) and that the law limits use of the data to trying to solve crimes.
Predictive Programming and the Human Microchipping Agenda #Biometrics #HumanInventorying #Transhumanism
Predictive Programming and the Human Microchipping Agenda confirms the reality of the microchip agenda, and shows that the weapon of propaganda has been used against the public for decades in order to familiarize us with the idea of being chipped. This process is called predictive programming, and its purpose is literally to program the mind of the victim so as to accept without question whatever is required by the programmer - in this case, the idea of being microchipped at some point in the future. The victim is generally unaware of being programmed, believing that it’s all just harmless entertainment. For this reason it can be a powerful and effective weapon against us.

By explaining this process and giving example after example, Predictive Programming and the Human Microchipping Agenda is an attempt to alert the viewer to some of the ways in which we have been manipulated throughout our lives for the specific purpose of slowly but surely shepherding us all into a Hellish world of microchip implants and totalitarian control. We hope that by exposing the programming we can break the program and derail this diabolical agenda. To be successful we need your help. WE THE PEOPLE WILL NOT BE CHIPPED! Join the movement at wethepeoplewillnotbechipped.com
Join the “Say No to UID” campaign on Facebook
It appears that Nilekani, the co-founder and former chief executive of Infosys Technologies Ltd, India’s second-largest software company, has misled the key functionaries of the Government of India into believing that he is deeply concerned about reaching the poorest of the poor with a 16-digit card (4 numbers are hidden?) in order to liberate them from poverty.
This proposed UID legislation authorizes the creation of a centralized database of unique identification numbers that will be issued to every resident of India, but it fails to provide provisions that preclude abuse of such a database for invading citizens’ rights to privacy and freedom of choice by national and transnational corporations like Vedanta and IBM. The legislation poses one of the gravest threats imaginable to citizens’ rights. It will damage citizens’ sovereignty beyond repair and has the potential to cause a holocaust-like situation in the future through profiling of minorities, political opponents and ethnic groups.
Augmented-reality eyewear is the next step toward a future in which we never again have an unmediated view of the world
Google announced yesterday that before the end of 2012, you will be able to buy augmented-reality smart eyeglasses from the search giant. The Android-powered glasses will have an onboard camera that monitors in real time what you see as you walk (or, heavens preserve us, drive) down the street. The lenses will then overlay information about people, locations, and whatnot directly into your field of view.
We knew this day was coming, but I certainly didn’t suspect it’d be so soon. Never again will you have to wonder “Where is the closest Pizza Hut?” or “What make of car is that?” or “Don’t I know her from somewhere?” Ubiquitous smartphones have already given us the ability to swiftly look up information with only a moderate disruption. Smartglasses completely remove the mediating step of pausing to wonder and ponder and research: data is simply there, an inseparable part of your visible world.
Overlay Google Maps onto the real world, and navigation becomes effortless. Overlay reviews and menus onto restaurant storefronts as you pass them; overlay nutritional data onto your plate as you eat; overlay purchasing info if you particularly admire your co-worker’s new shoes; overlay translations of foreign signage, breaking news, hilarious kittens romping at your feet.
As smartglasses become popular, the world will start to seem naked and inaccessible without a glossy data layer on everything. Everyday activities, maneuvering through the physical world, socializing, working, learning, will all be increasingly eased by the use of glasses; increasingly, until these activities start to feel almost impossible without the glasses. Who’s going to have patience to laboriously explain facts to a non-data-overlaid person? Give you my business card? Point you in the direction of Fifth Avenue? I don’t even remember how to spell my name! Where are your Googles?
Will businesses see the need for physical signs and billboards? Will municipalities bother to maintain physical street signs and traffic signals? Will smartglasses make the university lecturer’s blackboard and salesman’s PowerPoint obsolete as well?
What comes after that? With everyone wearing glasses (or, at this point in the future, contact lenses or implants), individual appearance becomes as malleable on the street as it is now on the Internet. You can overlay your real body with a digitally altered one, saving money on subtle nose surgery or just completely living life as a furry avatar.
What, though, will it take to get us to that tipping point, when head-up augmented reality suddenly shifts from a novelty to a ubiquity? Wearing cumbersome goggles on your face as you proceed through your day is a bit more of an intrusion than I, for one, am ready for. Sony’s 3DTV goggles are impressive but designed only to be worn in the comfort of your couch, and still I have yet to meet anyone who owns a pair. The gear will have to be small and easy to integrate with your basic life processes. Perhaps AR windshields in our cars will become common first, before we put them on our faces.
But, however it comes — the fully mediated future has begun.
(PhysOrg.com) — Austin, Texas-based Chaotic Moon Labs made a splash earlier this year with a high-tech Kinect-controlled skateboard that moved according to the rider’s hand signals. Now they are showcasing another skateboard that moves beyond Kinect power and hand signals to a board driven by just reading your mind. Think where you want to go, and your board takes you there. Following their Board of Awesomeness, their newest Board of Imagination is designed to show another twist on skateboard inventiveness, and also on what travel might involve with enough technical ingenuity and creativity at play.
Rather than just calling the new skateboard Board of Awesomeness V2, the creatives decided their new invention was no mere revision, but instead a skateboard worthy of its own name.
The Board of Imagination is a skateboard that carries the same Samsung tablet with Windows 8 and the same 800-watt electric motor as the earlier skateboard, but now sports a headset. With it, the board will read the rider’s mind and will move anywhere the rider imagines.
The skateboard can translate brain waves into action: the user visualizes a point off in the distance and thinks about the speed at which to travel to get there. The skateboard does the rest.
The mind-reader for the device is the EPOC headset from Emotiv. Described as a “neuroheadset,” the device serves as an interface for human-computer interaction. As part of the new skateboard, the headset lets the rider control speed and braking.
According to its site, Emotiv is a “neuroengineering” company. Its motto, “you think, therefore you can,” is reflected in devices that let users control objects just by thinking about them. The headset reads brainwaves and generates signals for the tablet. The software on the tablet interacts with the skateboard and the rider moves along. When the rider wants to stop the board, no hand signals are necessary, as in the earlier Board of Awesomeness; the rider just thinks about the upcoming point of arrival.
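Here's a rough sketch of what that control loop could look like in software, with the caveat that the headset class below is a stand-in, not Emotiv's real SDK, and genuine EEG intent detection relies on trained classifiers rather than a single score.

```python
# Hypothetical brain-to-board loop: read an "intent" score from a headset,
# map it to a motor throttle, and brake when intent drops. The headset
# interface is a placeholder, not Emotiv's actual API.
import random
import time

class FakeHeadset:
    """Stand-in for an EEG headset driver; returns a 0..1 'push' intent."""
    def read_intent(self) -> float:
        return random.random()  # a real system uses a trained classifier

def intent_to_throttle(intent: float, max_watts: int = 800) -> int:
    if intent < 0.3:          # weak signal: treat as a request to brake
        return 0
    return int(max_watts * (intent - 0.3) / 0.7)  # scale to 0..800 W

headset = FakeHeadset()
for _ in range(5):
    print(f"motor command: {intent_to_throttle(headset.read_intent())} W")
    time.sleep(0.1)  # roughly a 10 Hz control loop
```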
The Chaotic Moon Labs team currently has no intentions of commercializing the skateboards. They are, though, planning to open-source the code, and also to provide information about materials and the cost of goods so that others can build such boards, ”provided we get the clearance from our attorneys first,” general manager Whurley told CNET. (He goes by one name only.)
The Chaotic Moon Labs website talks about what motivated them to do another high-tech skateboard. According to the site, “Whurley likes to ride in the gorgeous Austin sunny days so the obvious thing to do was look at the Kinect sunlight problem!” The labs team instead decided to take the direction of a skateboard whose movements could be controlled via brainwaves.
More information: http://www.chaotic … imagination/
So what’s the next step, I asked? Do people start wearing biometric tokens that send signals to devices in the neighborhood, letting them know when you’re in their vicinity so they can respond by tweeting you to please buy them?
Sure, why not, comes the swift response from Salesforce.com CEO Marc Benioff. Last August, as regular ReadWriteWeb readers will recall, Benioff astounded his audience at the Dreamforce conference with the mind-alteringly imminent notion that Coke machines should become aware of their customers’ presence, and respond through their iPhones with bargains and loyalty points. Of course, Benioff’s idea at that time relied upon the customer always having his iPhone with him. This time, at the Cloudforce conference in New York this morning, Benioff one-upped his own idea with the notion that a biometric bracelet could supply interested products and devices in the wearer’s immediate vicinity with a kind of identity signal.
Benioff’s suggestion was brief and simple: Not just applications, but people working remotely, can get a better understanding of customers’ needs if they had vision into the context of where they are and what they’re doing. As demonstrated earlier in the day, a financial sales team might have immensely greater comprehension of the urgency of a customer’s needs if they were to see that she was at the bank, that she was talking to a loan officer, and that she had started filling out the paperwork for a mortgage application.
You’re not being very helpful
The roadblock preventing that sales team from knowing this information already lies with the customer’s ability or willingness to share. Follow my logic, if you will, as it weaves its way past a dense forest of psychology and pathology. Not everyone tweets everything, you see. And that’s a problem, because your needs while you’re at a gas station or a coffee shop may be very different from when you’re starting to fill out a mortgage application. What’s keeping you from tweeting, “I’m filling out loan paperwork?” Is your keyboard not big enough? Is there not time enough in-between your other tweets where you reveal that you were in the car, and that you got out of the car? Has the proper hashtag “#MORTGAGEAPP” not been created yet?
"Products need to become much more social," says Benioff. But they can only do that if people can talk to products, and people don’t talk to products. At least they don’t now. This is where the bracelet comes in. It could talk to products in the language of products. Maybe it can tell your car that you’re standing next to it. Imagine if your ignition key only worked for you but not for anyone else? Your biometric bracelet could identify you as the proper bearer of the key. If someone stole your bracelet, it wouldn’t work for that person because the bracelet might know your fingerprint and your heartbeat rhythm.
This vision starts to make sense. Imagine if a first responder station could immediately respond if you were in an accident and couldn’t reach the OnStar button yourself. Imagine being able to audit the location(s) of your teenage son throughout the day and night. There is enormous benefit to the notion of something being able to signal who and where you are.
From information in isolation, it becomes academic to move to information in the aggregate. How many teenage boys within a given county or district are outside of school between the hours of 1:00 and 2:00 pm? How many get into accidents? What are they driving? Just minutes before Benioff showed off his bracelet, Salesforce senior vice president Kraig Swensrud demonstrated Radian6, the company’s tool for displaying real-time, live social data about interests, activities, and conversations. With Radian6, you know which customers are talking about what products and when. And from there it’s just as academic to slice the aggregate however you like. How many customers over 40? Male? At home? At the bank? Filling out a mortgage application?
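That "academic" step is worth seeing in code, because it really is this small. With invented records and fields:

```python
# Rolling individual event records up into the aggregate counts described
# above. The records and fields are made up for illustration.
records = [
    {"age": 44, "sex": "M", "place": "bank", "activity": "mortgage_app"},
    {"age": 38, "sex": "F", "place": "home", "activity": "browsing"},
    {"age": 52, "sex": "M", "place": "bank", "activity": "mortgage_app"},
]

count = sum(
    1 for r in records
    if r["age"] > 40 and r["sex"] == "M"
    and r["place"] == "bank" and r["activity"] == "mortgage_app"
)
print(count)  # 2 -- and drilling back down is just the inverse filter
```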
Where to put the filter
This is tomorrow’s dilemma, the one that faces society when, as Benioff predicts, it abandons e-mail in favor of persistent, live connections. In today’s social networks, there are “groups” (Facebook) and “circles” (Google+) that let users establish their own filters for restricting the degree to which information gets automatically shared. How will we establish similar filters once it becomes possible for our location, our present activity, and our heart rate to be broadcast to every salesman on the planet? And how do we decide the extent to which our 24/7 broadcast of personal information gets aggregated into bar charts and pie charts? We may tell ourselves that no one can tell who and where we are from an aggregate chart, but in reality, it’s just as academic a process to drill down as it is to build up.
So do we create “circles” for who gets our heart rate and who doesn’t? Doctors, certainly. Bankers, maybe. Politicians, probably not. And how do we decide who these people are who deserve to see our personal data? Perhaps we could create rules. Only bankers in our county? Only doctors affiliated with folks we know? Maybe our friends have gone to certain doctors before. Perhaps we can find that out.
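In software terms, those "circles" are just per-stream access rules, and a rule engine for them is almost trivial. A toy sketch, with invented roles and streams:

```python
# Toy sharing-policy filter: each data stream gets a rule deciding which
# viewer may see it. Roles, streams, and rule shapes are all invented.
from dataclasses import dataclass

@dataclass
class Viewer:
    name: str
    role: str    # e.g. "doctor", "banker", "politician"
    county: str

POLICY = {
    "heart_rate": lambda v: v.role == "doctor",
    "location":   lambda v: v.role in ("doctor", "banker") and v.county == "King",
    "activity":   lambda v: False,  # broadcast to no one
}

def may_view(stream: str, viewer: Viewer) -> bool:
    rule = POLICY.get(stream)
    return bool(rule and rule(viewer))

print(may_view("heart_rate", Viewer("Dr. Lee", "doctor", "King")))     # True
print(may_view("location", Viewer("Sen. Roe", "politician", "King")))  # False
```

The hard part, as the rest of this piece argues, isn't writing the rules; it's that the data, once shared anywhere, leaks around them.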
Maybe if we do some drilling down ourselves. Let’s see a map of all the doctors my friends have ever visited. Let’s see how well they rated. On second thought, let’s see how well they rated among folks I care about. Do you suppose this doctor follows his own exercise regimen and goes jogging every afternoon? Between 2 and 5? Let’s find out.
If that’s not something we’re permitted to know… then for heaven’s sake, why? Why would a legitimate businessperson put a filter on his personal information? Doesn’t he want to attract customers? Doesn’t he care about his business? How will we learn more about this person? To paraphrase Marc Benioff, how can we become friends with our products?
Perhaps if we ask the products themselves. Surely there must be aggregate data available from the stores he’s visited, the clothes he’s tried on, the Coke machines he’s walked past. And now you see what I’m getting at. Once you place something in the public domain, you can’t bottle it up any more. If the cloud “knows” where you’ve been, and your filter says you don’t want certain people (or things) knowing about it, what is to stop an agent from deducing this information from other sources with which you’ve had contact, including (and especially) other things over which you have no control?