Wednesday, October 21, 2009
News came earlier this week that both of the relevant search engine players, Google and Bing, have reached agreements with Twitter to start incorporating user-generated content from the micro-blogging platform (i.e. users' tweets) into search query results. Additionally, Google has announced a new “social search” functionality for its search results, to be rolled out in Google Labs next week. In effect, the new Google social search will provide content from various social media sites at the bottom of each page of search results.
This combination of social media and search functionality delivers a powerful tool: the ability to perform a real-time “social search”. Using this tool, individuals can obtain a much clearer sense of real-world, current (or “trending”) topics [the TechCrunch article labels this the "pulse of the planet"]. Culling information from social media sites and incorporating it into search results can spotlight important events, uncovering what people are actually talking about right now. Additionally, what better way to gather up-to-date, personal information about a particular person than by Googling their name and being delivered text, images, and videos authored by or directly involving that person (as a law student interested in litigation, I can only imagine the limitless possibilities for pre-trial discovery!).
Herein lies a (potentially) major privacy concern. It may seem hard to fathom, but there are still people who are surprised when information they enter into Facebook, or onto Twitter, surfaces in undesirable ways on the Internet. YES! Should you choose to upload that picture of you 'totally dominating the beer pong table', it will most likely be searchable (discoverable by friends, employers, lawyers, your mother...). Real-time social search simply removes an accessibility barrier that these social platforms had previously, if inadvertently, established. Now, someone does not have to individually search every single social site, and they do not necessarily have to worry about matching a person's user name to their 'resume name.'
Granted, this concern may be premature and may never develop into a major issue. It is likely (hopefully) that the social media sites and the search engines will work together to pre-bake privacy settings into the actual functionality of social search. An easy way to opt out of search results from the social media side of the equation would be a good start (or a requirement that you opt in before your social media content can be displayed to others within search results).
While privacy controls may be forthcoming, these search engine/Twitter partnerships and the Google social search announcement highlight the burgeoning trend toward opening up user information online. The scope of what you can find out about someone, or about any given organization, through search is constantly expanding. While the younger generation might expect and accept these privacy implications, older and more cautious (sensible?) individuals may not be eager to see their continued progression.
Tuesday, October 20, 2009
Last night, I was in attendance at Santa Clara University, where Professor Jonathan Zittrain delivered an hour-long presentation titled “Minds for Sale: Ubiquitous Human Computing and the Future of the Internet.” The long title may have been slightly misleading; much of the pre-lecture buzz from audience members was commentary focused on the already ubiquitous nature of computing (“Look at that kid (me), he has one of those silly Kindles”). People generally seemed to think that the talk would center on issues discussed in Zittrain's popular book, “The Future of the Internet, And How to Stop It.”
I feel that the short title of the presentation, “Minds For Sale”, was much more befitting of the lecture. Zittrain focused on what (in his view) is a worrisome developing trend: the Internet, as a technological platform, has created myriad ways for people to hire other people to complete certain tasks or solve specific problems. He cited several pertinent examples, including Innocentive, a website that provides a platform for large companies to post problems that they are willing to pay scientists (~$20,000) to solve; Amazon's Mechanical Turk marketplace, where people are paid tiny sums of money to complete a diverse range of "HITs" (Human Intelligence Tasks); and reCAPTCHA, a program that requires the solution of a CAPTCHA ("stop spam...") while also helping to decode scanned books that happen to be of poor quality ("...read books").
Zittrain enumerated several specific concerns regarding why this trend is so unsettling. From the participant (worker) point of view, concerns included surveillance/privacy issues, alienation, and moral valence. He also noted a general, systemic concern: what he feels is a 'race to the bottom' effect in the labor laws surrounding these transactions (e.g. people earning well below minimum wage while “working” on Mechanical Turk "HITs"). Another major concern discussed was the growing disconnect between the people seeking others willing to work or to solve a problem, and those actually doing the work or creating the solution.
His most poignant example of the night might have been a Colorado smoke-out, where police could not arrest every participant but had taken pictures of the individuals who were smoking marijuana. Later, law enforcement posted the pictures online and offered a bounty to anyone who could identify the smoke-out participants. He used this example to transition to a scenario wherein a government could easily use the Internet as a platform to identify and silence political protesters. He imagined a Mechanical Turk "HIT" where an individual would be paid two cents to correctly discern whether a photo portrait of a protester matched up with an identification picture.
Yet, it was not entirely doom and gloom. Zittrain went on to offer several potential solutions, including the adoption of stricter (or looser) labor standards, widespread adoption of various opt-out opportunities, and the ability for 'workers' to build some sort of portable reputation from the 'body of work' they create on a given platform. Perhaps his most timely suggestion was simply to urge more general disclosure throughout all levels of these transactions (he hinted that he approved of the FTC's forthcoming guidelines calling for sweeping blogger disclosure).
During the Q & A, Santa Clara Law Professor Eric Goldman raised a pressing question. He challenged the characterization of the Internet, as an enabling platform, as being quite as troublesome and 'dangerous' as Zittrain urged. Professor Goldman pointed out that paying people to do things that may be morally questionable is certainly not a new phenomenon, and argued that the Internet and its communities of users are capable of self-regulation and can effectively prevent the more "threatening" scenarios from becoming accepted practices on the web. Zittrain somewhat dodged the issue, and instead polled the audience as to whether, after just listening to his presentation (fair or foul?), they felt 'threatened' by this trend. About two-thirds of the audience raised their hands, siding with his viewpoint.
I believe that the "future of the internet" (to borrow from Professor Zittrain) lies somewhere in the middle. Human nature and morality, our willingness to do just about anything unscrupulous if we are paid enough money, is inherently what it is. There will always be "bad guys". Granted, it is unnerving that the Internet adds a global scale and an increased potential scope of behavior (think of the limitless possibilities for defining any given "HIT") to the mix.
What Professor Zittrain seems to gloss over, however, is the community and personal aspect undergirding the Internet. The intermediaries, the ones who set up and run these platforms and who must be charged with policing these types of transactions, are people too. Amazon or Google or Innocentive must be the ones who ensure that this technological platform is not abused.
The bottom line is that opening up a marketplace where we can offer our minds for sale can turn out to be extraordinarily beneficial to any number of parties. Yet it is critical that, collectively, we all practice what Google preaches: “don't be evil”.