Response to the NSA

The article reveals an interesting side of the NSA and helps explain the start of what I would call a Spying Empire.

The only justification the NSA keeps recycling is that surveillance is an effective tool in the fight against terrorism, and the logic behind that claim is deeply fallacious. According to the article, the argument the NSA uses to defend its actions is "that if today's surveillance programs existed before 9/11, it might have been able to stop those attacks," which is unreasonable because, as we mentioned in previous discussions, the biggest challenge in the past was not the data itself but the technologies then available to analyze it.

The technologies available before 9/11 could not have analyzed, in a timely way, the amount of information the NSA collects today; the systems would either have been overloaded and malfunctioned, or passed over the very information needed to prevent terrorism. Data alone is not enough; it takes technology to make use of it. And if the NSA argues for bringing today's technology back to the years before 9/11, it might as well argue for bringing today's weapons back to prevent World War II and save over 60 million human lives, or today's vaccines back to prevent various plagues and save hundreds of millions more. Arguing about what could have been done in history with today's technology is an empty cliché. If the NSA wants the right to use our information, it had better prove to society, with explicit examples, the benefit we gain, not offer a single sentence about fighting terrorism.

So if terrorism is not the NSA's first concern, what else might it be? In my opinion, it comes back to our debates about the fight over power and money. The NSA may have started this project for good causes, fighting terrorism among them. However, as the NSA grows, the amount of data it collects grows tremendously, and money has to be spent on building data storage, developing mechanisms for analyzing the data, and hiring experts to help analyze it. All of this is expensive, and the government's money (more accurately, U.S. tax money, the money of the very people being spied on) is fueling this Spying Empire. But the government will not keep handing money to the NSA for free: once the original problem is solved, whether terrorism or other good causes the NSA cannot name, there is no longer any justification for the agency to grow or to keep receiving money. Hence the NSA has to create new goals, new things it can do with the extra data it holds. This is arguably connected to how Google has used its data, in the past and now with AdWords, deliberately turning the data it collects to its own purposes.

At this point, I can see no reason to let the NSA keep spying on my information, or on other people's data. It is a hard answer to give, though, because many lives are put on the scale, and even the slightest chance that the NSA could use our information to save those lives is difficult to waive.

 

Reaction to the NSA readings

 

I will start my reflection by stating outright that I take no position, for or against, on the NSA's electronic mass surveillance program, nor do I have a personal opinion on the matter. It is a complicated issue, and I do not feel that a few public statements and editorials are enough for me to make an informed decision.

 

On one hand, the very idea of a government agency monitoring our private communications to assess how threatening we are is so Orwellian that one's natural first reaction as a citizen of a democracy is to cringe in horror. The belief that our privacy is a fundamental right, one that should not be so easily curtailed, is so strong that it is written into the Constitution: "The right of the people to be secure in their persons, houses, papers, and effects… shall not be violated… but upon probable cause…" The Fourth Amendment is very clear: a reason for a search or seizure must be established before the act itself, and the particulars of the search must be well defined. Ignoring all other acts, laws, and legal decisions since 1776, the actions of the NSA are illegal.

 

However, the original text of the Fourth Amendment was written before the time when a criminal or a mentally ill person could, with only a minimum of assistance and resources, do great harm to many people. The Constitution was never meant to be a fixed document, slowly becoming obsolete in the face of new technology and new situations. The fact remains that the world has changed: the modern enemies of the U.S. don't wear one uniform or fly one flag, and most don't even take orders from the same place. Decentralized terrorism has made traditional means of intelligence gathering ineffective. We live in a new age, with new tools that have the potential both to threaten and to safeguard the security of this nation.

 

We cannot deny ourselves any weapons in the struggle for security based on words that were never meant to address every situation. But we also cannot simply throw away the liberties we hold so dear just because we are scared of what might happen otherwise. The U.S. public has to come to a modern conclusion about how much surveillance is too much; we've put it off for thirteen years and now we're paying the price.

 

Setting aside the current debate, there is a fact that should be considered far more often than it is: there are precedents for this situation in U.S. history. During the Civil War, the Union government routed vast portions of telegraph traffic through the office of the Secretary of War, who had an unprecedented amount of authority both to view what was being sent and to stop telegrams from being delivered.

 

A point of special importance: every time a surveillance program has been shut down after being deemed too extreme, it has taken a larger public outcry. Lincoln's telegraph espionage was shut down from within the U.S. government. Shamrock was ended after a public outcry, but not before one. Today, we are in the aftermath of a major whistleblowing event. (That term is awkward to me as well, but I couldn't think of a better one.) This event has had major internal as well as international repercussions, but, as far as we know, no major changes have been made to the NSA's policy of mass surveillance, nor has that initial outrage by the American public manifested itself since. The question can be asked: is the increasing resilience of mass surveillance programs a reflection of the fact that we increasingly live in a world where we need them?

Reaction to NSA Articles

The NSA Files revealed a lot about the workings of the NSA that I previously did not know. What Snowden was able to expose about the agency was incredible: for years Americans have been recorded, spied upon, and lied to by NSA officials. One moment in the NSA Files that struck me was the appearance of the quote "If you have nothing to hide, you have nothing to fear." It sounds like something taken out of 1984 or some other dystopian novel in which Big Brother watches over and surveils its citizens; it does not seem like the type of thing that would be said in the United States. Yet this quote was used to describe the workings of the NSA, which truly revealed how intrusive the agency has become.

On November 3 we looked at the piece The Assault on Privacy by Arthur Miller. In it, he warned of the dangers of computers and the ways in which computer technology would threaten informational privacy. As we discussed it, no one seemed too threatened; passwords and encryption seemed like enough protection for us. But for Miller, they weren't. He explained that the only way to truly protect our information was to have trustworthy information managers bound by a code of ethics and to eliminate the collection of sensitive information. These safeguards seem almost obviously worth adopting, yet the NSA readings suggest they were not adopted. For example, Jameel Jaffer explains that the government is collecting extremely sensitive information about people in order to learn the associations between them. The NSA has tapped into communications links to access information about millions of Americans and has exploited the law to do so. This is both shocking and disturbing. If we are supposed to be living in a democracy with freedom of speech, how can surveillance of this kind be allowed?

When reading about the NSA, I thought it was incredibly interesting how the article and the website treated the NSA and the government as an enemy. For example, Eric Grosse said, "It's an arms race. We see these government agencies as among the most skilled players in the game." It is not often that the government is portrayed so openly and so publicly as a competitor or rival. Yet in the articles, the NSA's infringement upon American lives made it the true enemy.

Response to the Chomsky Debate

Chomsky makes a valid point about the limits of statistical modeling and theorizing. Despite centuries of research and work, mankind can only model the smallest of natural processes with any accuracy and is forced to resort to gross simplification when modeling large ones. It is tempting to say that modeling and theorizing are just book-keeping: a way of tying up loose ends after the "real" science is done.

Conversely, traditional experiments, while generally precise, are an inefficient way of discovering new facts when dealing with large systems with many variables. Because of this, statistical models become necessary to narrow down and pinpoint the key factors in a system for closer experimental study.

Frankly, rather than pick a side in the modeling-versus-experimenting debate, I'm inclined to agree with Millikan, who stressed the importance of both theory and observation in the advancement of science. Without observation, theory is impossible, and without theory, observation becomes useless. Arguing about whether experimental research or theory is more important is like debating the relative merits of a car's engine versus its steering column; one may be flashier and more fun to work on, but without both, the vehicle cannot operate.

What must not be forgotten, however, is that the field of statistical "Big Data" modeling is improving all the time; indeed, the rise of Google and its search engine empire in the past decade lends credence to the idea that we are only scratching the surface of what can be accomplished with this technology. Big Data analysis promises to be a powerful new tool for science, both in the collection of raw data and in its synthesis.

Response to Norvig and Chomsky debate

This debate is, in my opinion, rather fruitless, since it comes down to a difference in how these two men define science. Chomsky believes that the study of science means gaining insight into the world, while Norvig supports the idea that we must use models, which capture that insight, and then deduce the insightful theory from them. In my opinion, it is a no-win situation in which their disagreement could go on forever.

First of all, the dispute over "insight" versus "description" is meaningless, because Chomsky himself declared in the interview that "some of the modules may be computational, others may not be," which undercuts his own position. Some parts of science can yield "insight," while others may not, and so scientists need a substitute, something that helps them observe as accurately as possible on the way to a final "insight"; that is where "description" comes in. The other direction is easier, since Norvig himself acknowledges the importance of "insight" in science and simply wants Chomsky to admit the importance of "description" as well.

Secondly, it is quite clear that, even though learning a language draws on an innate ability, it is also a process of absorbing knowledge. Most studies suggest that it is better to learn a language through listening and reading; that is, we do not learn a language by studying and applying rules, or by memorizing and reproducing them. Rather, we try to understand the knowledge we are taking in, through the comprehensible input collected by interacting with the world, and we develop our vocabulary and grammar on that basis. That is why, in my opinion, the difference between an adult and a child is not how their brains treat the language (as a puzzle or as a language, according to Chomsky), but their learning environments. It is true that children are better at mimicking and are not preoccupied with languages they have already learned, as adults are, but those advantages mainly help them with pronunciation, which is irrelevant to computers. Adults, on average, face a more demanding environment than most children do. A child, from birth, has little to do but learn a language, while an adult has other concerns competing with language learning for his or her time. The devotion is much less while the expectation is higher: an adult's standard of fluency is much higher than a child's, which leads to the presumption that children are better at learning languages. A computer, likewise, acts like an infant demanding feeds of knowledge, the set of parameters, before it is able to communicate.

The question, then, is to identify the human learning process we can mimic in a computer. From my experience, most of my friends who live in English-speaking countries score better than foreign students on multiple-choice grammar questions, yet they are less likely to be able to explain the reasons behind their choices. The answer I mostly receive is "because it sounds familiar." Where does that "familiar" come from? Learning a language naturally as you grow up is a process of hearing (taking in data) and recalling when you hear it again (internally processing that data). So it is not wrong for Chomsky to criticize the ineffectiveness of the old Markov chain, because it fails to capture the familiar patterns that help us distinguish the grammatical from the ungrammatical. However, as Norvig says, it is a fifty-year-old mechanism that, I believe, will soon be replaced by better probabilistic and statistical models in this fast-changing world. On the other hand, Norvig has also shown that even with the old Markov chain, probabilistic models are better than Chomsky's theory at judging how sensible a sentence is, while Chomsky's is better at separating the grammatical from the ungrammatical. The irony is that the algorithmic system (the probabilistic model) is used to judge an intricate set of words for its meaning, while the less systematic approach (Chomsky's theory) is used to judge how well a sentence follows the rules of a language.
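
To make the idea of "familiarity" concrete, here is a minimal sketch in Python, my own toy illustration rather than anything from Norvig or Chomsky, of a bigram Markov-chain model that scores a sentence by how familiar its word pairs are; the tiny corpus, the smoothing constant, and the function names are all invented for the example.

```python
# Toy bigram ("Markov chain") language model: count adjacent word pairs in a
# small corpus, then score new sentences by the product of smoothed bigram
# probabilities. A higher score means the word order looks more "familiar".
from collections import Counter

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat saw the dog",
]

unigrams, bigrams = Counter(), Counter()
for sentence in corpus:
    words = ["<s>"] + sentence.split() + ["</s>"]
    unigrams.update(words[:-1])
    bigrams.update(zip(words[:-1], words[1:]))

def score(sentence, smoothing=0.1):
    """Product of smoothed bigram probabilities for the sentence."""
    words = ["<s>"] + sentence.split() + ["</s>"]
    vocab = len(unigrams) + 1
    prob = 1.0
    for prev, cur in zip(words[:-1], words[1:]):
        prob *= (bigrams[(prev, cur)] + smoothing) / (unigrams[prev] + smoothing * vocab)
    return prob

# A sentence built from familiar pairs outscores the same words scrambled,
# even though the model has no explicit rule for "grammatical" vs "ungrammatical".
print(score("the cat sat on the rug"))
print(score("rug the on sat cat the"))
```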


Response to Information

There are several interesting ideas discussed in these chapters. Since Jose has already mentioned the "inclusionists" and "deletionists," I think it is better if I touch on the fear of information overload raised in the next chapter. "New News Every Day" builds on the previous chapter, about the birth of Wikipedia, to give us a picture of an age defined by the accumulation of information.

It all started with e-mail, a simple and convenient tool that won everyone's favor. The sender and receiver were no longer bound by time and space. However, people soon began to discover the threat of information overload: too much control was given to senders and little to none to receivers. "People get too many messages, which they do not have time to read. This also means that the really important messages are really difficult to find in a large flow of less important messages." With the arrival of the Internet and the e-book (the Kindle), people found the problem even more severe, yet the nature of learning, the human process of making sense of the world from birth, has always been to receive information and process it. As the text puts it, yesterday's newspaper takes up space needed by today's work, and we have to empty our memory for newer news: "forgetting used to be a failing, a waste, a sign of senility. Now it takes effort. It may be as important as remembering."

I mention these problems society has faced in the past because they link to the question we discussed last class: the necessity of an "app-based OS" like the iPhone's compared to a "multi-layer folder OS" like Windows. The multi-layer folder OS comes out ahead because of filter and search, the two answers to information overload. The importance of these two engines seems clear to us today. Filtering gives us the authority to bypass information we do not care about and concentrate on what we want; when information, once precious, becomes cheap, our attention becomes more expensive, and we become selective about what we take in, not just news but almost everything. The other engine, search, shows the contrast between the Windows and iPhone layouts. If a file is a book and our laptop or iPhone is a library, then a book stored on the laptop is filed under multiple layers of folders, categorized by author, language, genre, and so on, and the reader has to remember that path in order to find the book the next time he wants it; a book stored on the iPhone will, literally, just sit there, right on the screen. Now imagine we have thousands or millions of books: what happens to the two operating systems?
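
To make the contrast concrete, here is a small sketch in Python, my own illustration with invented file names and fields rather than anything from the reading: the folder model only gives the book back if you remember the exact path, while the flat model lets search find it from any attribute you recall.

```python
# Two ways of keeping the same "library" of files.

# Folder model: nested layers (genre -> language -> author -> files); the
# reader must remember the whole path to get the book back.
folders = {
    "fiction": {"english": {"orwell": ["1984.txt"]}},
    "science": {"english": {"gleick": ["the_information.txt"]}},
}
book = folders["fiction"]["english"]["orwell"][0]   # forget one level and it is lost

# Flat model: everything sits in one pile with metadata; search does the work.
flat = [
    {"name": "1984.txt", "author": "orwell", "genre": "fiction"},
    {"name": "the_information.txt", "author": "gleick", "genre": "science"},
]

def search(items, **attrs):
    """Return every item whose metadata matches all the given attributes."""
    return [item for item in items if all(item.get(k) == v for k, v in attrs.items())]

print(book)
print(search(flat, author="gleick"))   # any remembered detail is enough
```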

In my opinion, the way we store our information is like the way we use Windows to store files. We set up layer upon layer of description and use a search engine to find things based on those identities. Once the information is stored, however, we do not see it again until the next time we need it. And if someday the people who are the only ones who know about a piece of information forget it, do we still consider that information to exist? The answer is no. Many unknown books sit in the library and will stay there until someone requests to find or borrow them; but if nobody knows about those books, how can anyone borrow them? This problem leads back to the fight between the "inclusionists" and "deletionists" over which information is worth remembering and treasuring. It is a conundrum that will not be easy to solve, at least until we come up with better algorithms for dealing with information overload.

 

Response to Information, After the flood

Reading about Wikipedia and the longstanding divide between its "inclusionist" and "deletionist" factions, I began to think about the role of big data not only in documenting the world, but in creating it.

Some background: the original conflict was over which concepts were important enough to merit their own Wikipedia page. Some believed that Wikipedia should be a place to gather the best of mankind's knowledge and make it available to the world. Others believed that Wikipedia should make no judgment about its content and should seek to reflect everything that exists. The example given in the book was a South African restaurant that was deemed worthy of a Wikipedia page over much opposition.

Now, the noteworthiness of one restaurant doesn't seem like that big a deal, but when big-data constructs such as Wikipedia and Facebook play such a large part in life, at what point does Big Data stop reflecting life and life start reflecting Big Data? Most of us are on some form of social media; not only that, our events and relationships are too. People talk about being "Facebook official" and organize their social circles online. In a way, it has reached the point where something is only real if it is confirmed online.

This effect also extends, to a certain degree, to business, with new companies living and dying by their online presence. Angie's List and Yelp are the new yellow pages, and if a company does not appear on them, it might as well not exist. Search engines understand this and have begun to charge extra for higher spots in search results. Some business listings engage in more brazen profiteering, charging companies to remove bad reviews from their pages. In this way, big data has begun to shape reality instead of the other way around. In essence, "so it is written, so it shall be."

Weaving the Web

It's always fascinating to read about the origins of technologies we use every day, and when it's a technology as ubiquitous as the World Wide Web, even more so. What made this reading interesting to me is that the development of the Web was not a linear research-and-development project. The Web and the Internet beneath it, from what I gathered from the writing, grew organically from their beginnings at CERN and on ARPANET. No one company, government, or developer is responsible for them.

It was interesting how much of the writing resembles a philosophical work. I think part of this comes from the fact that the origins of hypertext were basically an experiment in changing the way information and its connections were thought about. The author speaks of abstract connections between ideas and of the quest to bring information to the world. These abstract ideas became the Internet and the World Wide Web.

One point I found particularly interesting was the differentiation made between internal and external links. While it seems that the decision to organize hypertext with two different types of links was made to save space, it had a profound impact on the future development of the Web. By making some links one-way, a hierarchy within the data was created, changing the way the information is read and organized. Although it seems like something minor, the way that data is organized changes its impact on the world.

Response to Weaving the Web

The reading on the process of coming up with and implementing the World Wide Web was interesting in that it raised many different questions. Previously, I had not known the difference between the World Wide Web and the Internet, so the first few chapters of this book were very enlightening to me. The place where the Web was produced also interested me a lot. In the past, I had only truly known about CERN from the novel Angels and Demons by Dan Brown. In the novel, CERN is the site from which a canister containing antimatter is stolen, and it is described as a place with incredible scientific minds. However, CERN is the European Organization for Nuclear Research and is never mentioned in the novel as the birthplace of the Web, so the fact that Berners-Lee invented the Web there really surprised me. It seemed as though he too believed CERN was not the ideal place for him to do his work. For example, when writing a proposal to create the Web, Berners-Lee had trouble convincing CERN to allow him to do so. On page 32, he states "Another reason for the lackluster response was that CERN was a physics lab," which further emphasizes how strange it was that such an important computing technology came out of such a place.

As I mentioned above, before this reading I had not known the difference between the World Wide Web and the Internet. I believe this is because the two are often used almost interchangeably in everyday language, although this is incorrect. It was confusing when first reading the chapters, because I believed the two words to be interchangeable and thus thought Berners-Lee was wrong to claim he invented the World Wide Web. Yet as the chapters continued, the distinction between the two became evident. The Internet, created before the Web, connects different computers together using a variety of protocols. The Web is only one part of the Internet, and it uses HTTP for hypertext. This left me a little confused. If the Web is only part of the Internet, what other parts exist? Are e-mails separate from the Web? And what about instant messaging? Are these separate from the World Wide Web, or have they become part of the Web as time has gone on?
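
As a partial answer to my own question, here is a minimal sketch, my own illustration rather than anything from the book: the Web is the part of the Internet that speaks HTTP, while e-mail travels over separate protocols such as SMTP and IMAP on the same network, so it is part of the Internet but not of the Web. The URL below is used purely as an example address.

```python
# Fetching a page means speaking HTTP to a server over the Internet; an e-mail
# client would open a connection to a different server and speak SMTP or IMAP
# instead, using the same underlying network but a different protocol.
from urllib.request import urlopen

with urlopen("http://info.cern.ch/") as response:   # example URL only
    print(response.status)      # HTTP status code, e.g. 200
    print(response.read(200))   # first bytes of the HTML page
```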

Another detail that raised questions for me was the idea of external and internal links. Although I understood the internal link, which can go in either direction, I had questions about external links. If an external link only goes in one direction, is there any way to get back to the previous node? (Is there a "back" arrow?) The two types were created so that external links could prevent information overload, while internal links, which appear in both nodes, risk letting files "get out of step." Yet the distinction between the two and the need for both were confusing to me, as I try to work out in the sketch below.
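
A toy sketch, with invented class and field names of my own rather than Berners-Lee's actual design, helped me see the trade-off: an external link lives only in the source node, so nothing in the target ever needs updating (and nothing points "back"), while an internal link is recorded in both nodes, which is exactly why the two files can "get out of step."

```python
# One-way (external) vs two-way (internal) links between hypertext nodes.
class Node:
    def __init__(self, name):
        self.name = name
        self.external = []   # one-way: only this node records the link
        self.internal = []   # two-way: both nodes record the link

    def link_external(self, other):
        self.external.append(other)      # 'other' is left untouched, no back-pointer

    def link_internal(self, other):
        self.internal.append(other)
        other.internal.append(self)      # both files must now be kept in sync

a = Node("hypertext.html")
b = Node("cern-intro.html")
a.link_external(b)   # b never knows it is linked to
a.link_internal(b)   # removing this link later means editing both nodes
```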

Weaving the Web Response

A point raised in our last discussion was the advantages and disadvantages of truly understanding how the computer works. It was asserted that there are many electronics we use while having absolutely no idea how they function, and this article reveals how true this is of the Internet. Prior to the reading I had no idea how anything on the Internet worked, or even what "http," "URI," or any of the associated acronyms meant. I think this ignorance is especially common in our generation, since the Internet was just something that was always there, something we grew up with. We never question where it came from or how it came to be; we simply start interacting with it. Reading this piece made me realize how ignorant I am. It occurred to me that I don't even know how things are saved on websites. I would think that the site itself has some server that things are saved to, but is there some cloud that everything is saved to? And who would have access to said cloud, and is privacy on the Internet even possible then?

Another interesting part of the article was how the Web wasn't really popular at first, and Tim Berners-Lee really needed to push for it. It wasn't even an official CERN-funded project; Berners-Lee worked on it in his spare time. He needed to go from source to source to find the right funding and help for his project. The Web today is ubiquitous and probably the most popular form of media, so it is interesting that those in the past didn't see its potential. Berners-Lee also mentions the Memex, and it seems that he took that idea and really ran with it. He calls Bush, Engelbart, and Nelson ahead of their times, which is ironic because it seems Berners-Lee was ahead of his time too. Those around him, much like the contemporaries of the men he mentions, didn't see the potential, and those who did couldn't really help. Berners-Lee and the Web were the synergistic combination of all of his predecessors' efforts, and a real testament to the kind of knowledge sharing those men wanted.

In one of my other classes, we watched Dr. Strangelove, a movie in which the nuclear arms race is satirized. In the film, the Doomsday Device (a device that would essentially end life on Earth as we know it) can only be activated by a computer and would go off if a human tried to disarm it. The movie framed this as taking the human element out of nuclear war. It makes me think, though, about how quickly society has become intertwined with computers and the Internet. The Y2K scare was almost fifteen years ago, when the possibility of computers not working into the new millennium seemed unthinkable. Now, fifteen years later, think how interdependent our lives have become and how devastating a loss of the Internet would be. This article was written in 1999, which shows how relatively new the Web is. While I am not asserting that the Internet is inherently bad in any way, shape, or form, it is important to discuss the ramifications of our Internet dependence.