
Response to Lau’s Kitchen and Social History of Database Populism

Lau's piece mentions technologies that have since been successfully developed, if not improved upon, by many electronics companies. Like Bush's, though, if we consider the time this article was written, 1975, it is amazing how the author foresaw some innovative ways of using the technology of that era. The fact that we have only recently achieved such success, almost 30 years after the idea was circulated, shows how hard it is to turn even a well-planned invention into reality.

As for the social history piece, I think it exaggerates when it speaks of the importance of being "data literate… or their lives will be destroyed by those who are." However, I don't deny that at this rate, data literacy and algorithms will become more and more crucial in our lives. At the start of the computer age, when the first computers were invented, people said each country would only need about three computers. Yet less than a century later, everything has turned out nothing like what was predicted. People learn how to work with computers and their software, and more and more people know how to code, assemble, and design their own computers from separate parts. Computer skills like knowing how to use Excel or PowerPoint have become a must in most job requirements. Therefore, it is just a matter of time before people become familiar with data literacy and algorithms, and that time will not be too long. As for the article's comparison between the computer and the car: I am sure that not knowing how to fix your car won't get your life screwed up by a car engineer; and really, we need only a partial understanding of how the engine works in order to drive a car, or even to become a racer.

One more example: because the technologies keep changing, it is not feasible to teach everyone data literacy and algorithms in a timely manner. For decades there has been a back-and-forth fight between SQL and the relational model on one side and mass-scale information processing (NoSQL) on the other, just like the fight between electric cars and gas-engine cars, which confuses even the professionals and thus leads to chaos in the computer-using community. To reach the data literacy the author wants, I think we first have to arrive at a stable phase where we can be sure our knowledge has reached its potential, or at least will stay put long enough for society to catch up and become familiar with the literacy and the algorithms themselves.


On Database Populism, Data Literacy

Data Literacy versus Algorithm Literacy (6)

Against the claim of Elective-C, I assert that prioritizing literacy in data over literacy of algorithms would merely engender an inchoate prudence -the delusion of retaining a profound and vigilant perspective on digital exploitations.

The abundance of memory is an evident premise of our contemporaneity. Data is therefore apt to be stored in protean and highly inefficient forms (hence big data, as we discussed in the first several classes). These attributes of data storage stand in severe contrast with those of early computation, wherein space, rather than processing speed, was the limiting factor. Stored data was hence kept in precomputed, efficient forms that retained the traces of algorithmic processing. For early data forms, literacy in data would be apt, for such literacy would also comprise discerning the algorithm, hence the ends and priorities that the data structure serves.

In our day, data forms no longer necessarily retain the vestiges of precomputation, hence of the algorithms. Moreover, with the increasing demand for coarse data (per its versatility), traces of precomputation can be conceived to have gone extinct. Therefore, a discrete literacy of algorithms is now imperative, because the handling of data is no longer evident in the data itself.


____

Further explanation on how algorithmic traces evanesced from databases:

The extinction of algorithmic traces is mostly a matter of the evolution of programming paradigms rather than the evolution of data retention forms.

To clarify this evolution: in earlier years, programming was not yet utilized for extracting social patterns or constructing complex relational observations and suggestions (as, for instance, Google does). The uses of algorithms therefore consisted solely of the fundamental algorithmic operations (such as sorting, altering, and conveying data), and these fundamental algorithms were in strong correspondence with the data form elected for use. Hence, scrutiny of the database -data literacy- would reveal the algorithm itself.*

Contemporary programming mostly revolves around one paradigm, object-oriented programming (OOP). The database structures utilized in OOP are mostly in parity with those utilized before, hence -as a corollary- the databases no longer display the pertaining algorithms' characteristics. This is due to two reasons: (1) OOP mostly consists of encapsulating and exchanging code, so that a programmer is the end user of another programmer's code, and (2) OOP enables greatly high-level programming wherein code may seem an idiosyncratic discourse with the computer.

(1) Encapsulation is a primary, thus rampantly utilized, characteristic of OOP, and due to encapsulation, programmers generally do not have access to deeper levels of code. The deeper levels handle the construction of data through generic, fundamental database forms (as in the preliminary practices), and programmers merely write code to access these data structures; a minimal sketch of this opacity follows below.
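
To make the opacity concrete, here is a minimal Python sketch of encapsulation (my own illustration; the class and method names are hypothetical, not drawn from the readings). The caller interacts only with the public methods and cannot tell which underlying structure the class uses.

    class RecordStore:
        """Encapsulates its storage: callers never see the underlying form."""

        def __init__(self):
            # Internal detail: could be a dict, a tree, or a remote database;
            # nothing in the public interface reveals which.
            self._records = {}

        def put(self, key, value):
            self._records[key] = value

        def get(self, key):
            return self._records.get(key)

    store = RecordStore()
    store.put("alice", 42)
    print(store.get("alice"))  # 42 -- the data form stays hidden from the caller

The point for the argument above: scrutiny of what the code exposes no longer reveals the data form, let alone the algorithm behind it.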

This aspect also emphasizes how data literacy would be incomplete without algorithmic literacy. In our time, data and algorithms are further segregated -the notion of an algorithmic identity embedded in the data is further refuted. Therefore, an individual must be perspicacious about how the code executes beneath the generically formed databases.

(2) With OOP comes the paradigm of highest-level languages, which denotes code that grandly imitates human language. Moreover, this imitation is not as in COBOL; with OOP, programmers get to define their own concepts, teach -with code- how these concepts are handled, and then code using these concepts.** Due to the proliferation of these arbitrary concepts, the establishment of a standard database, tailored specifically to serve a large set of algorithms, is impractical.

This aspect, too, emphasizes the imperative of algorithmic literacy. Algorithms no longer consist solely of mathematical steps -they also comprise steps defined in accord with the whims of the programmers. The ability to distinguish when the mathematical process is interrupted by the whimsical code is the quintessence of establishing true prudence, the vigilance against digital exploitation.

___

* Theoretical example: A client, in much earlier times, has been using a database capable of swiftly retrieving the maximum value of an attribute across a large set of recorded individuals. The client has also noticed that this attribute can be increased or decreased -again with great speed- for a set of consecutive records. With data literacy, this client can fathom that the underlying data format is a segment tree (sketched below). Since a segment tree is largely inefficient for the addition of new records, the client may then deduce that this set of records is not amenable to change, and hence that it may actually have been constructed for a sociological investigation of a pre-determined cohort size.
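
A minimal Python sketch of such a structure, written as my own illustration rather than anything from the readings: a fixed-size segment tree with lazy propagation, which answers range-maximum queries and applies range additions quickly, but offers no cheap way to insert new records.

    class SegmentTree:
        """Fixed-size segment tree: fast range-max queries and range-add
        updates, but growing it requires a full rebuild -- the rigidity
        the example above hinges on."""

        def __init__(self, values):
            self.n = len(values)
            self.tree = [0] * (4 * self.n)  # max of each segment
            self.lazy = [0] * (4 * self.n)  # pending additions
            self._build(1, 0, self.n - 1, values)

        def _build(self, node, lo, hi, values):
            if lo == hi:
                self.tree[node] = values[lo]
                return
            mid = (lo + hi) // 2
            self._build(2 * node, lo, mid, values)
            self._build(2 * node + 1, mid + 1, hi, values)
            self.tree[node] = max(self.tree[2 * node], self.tree[2 * node + 1])

        def _push(self, node):
            # Propagate a pending addition down to both children.
            for child in (2 * node, 2 * node + 1):
                self.tree[child] += self.lazy[node]
                self.lazy[child] += self.lazy[node]
            self.lazy[node] = 0

        def add(self, l, r, delta, node=1, lo=0, hi=None):
            """Add delta to every value at positions l..r (inclusive)."""
            if hi is None:
                hi = self.n - 1
            if r < lo or hi < l:
                return
            if l <= lo and hi <= r:
                self.tree[node] += delta
                self.lazy[node] += delta
                return
            self._push(node)
            mid = (lo + hi) // 2
            self.add(l, r, delta, 2 * node, lo, mid)
            self.add(l, r, delta, 2 * node + 1, mid + 1, hi)
            self.tree[node] = max(self.tree[2 * node], self.tree[2 * node + 1])

        def max_in(self, l, r, node=1, lo=0, hi=None):
            """Maximum value at positions l..r (inclusive)."""
            if hi is None:
                hi = self.n - 1
            if r < lo or hi < l:
                return float("-inf")
            if l <= lo and hi <= r:
                return self.tree[node]
            self._push(node)
            mid = (lo + hi) // 2
            return max(self.max_in(l, r, 2 * node, lo, mid),
                       self.max_in(l, r, 2 * node + 1, mid + 1, hi))

    ages = [31, 45, 27, 52, 38]  # one attribute of a fixed cohort
    t = SegmentTree(ages)
    print(t.max_in(0, 4))        # 52
    t.add(1, 3, 10)              # swiftly adjust a run of consecutive records
    print(t.max_in(0, 4))        # 62

Both operations run in O(log n), which is exactly the speed profile the client observes; inserting a new individual, by contrast, would mean rebuilding the whole tree.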

** Theoretical example: As a programmer, I can define a class named Human. I can then define methods (concepts) named isANuisance() and delete(). After these definitions are done, I can just tell the code: foreach(human in Humans) if(human.isANuisance()) human.delete(). With this code, I would be conducting an esoteric, whimsical discourse with the computer; no one else knows what I mean by "isANuisance". A runnable rendering follows below.
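
Rendered as runnable Python (a sketch of my own; the criterion inside isANuisance() is deliberately arbitrary, which is the point):

    class Human:
        def __init__(self, name, complaints_filed):
            self.name = name
            self.complaints_filed = complaints_filed

        def isANuisance(self):
            # A wholly whimsical criterion: only its author knows why
            # "nuisance" is defined this way.
            return self.complaints_filed > 3

    humans = [Human("A", 1), Human("B", 5), Human("C", 0)]

    # The pseudocode foreach(human in Humans) if(human.isANuisance())
    # human.delete() becomes a filter: "deleting" here just means dropping
    # the record from the collection.
    humans = [human for human in humans if not human.isANuisance()]
    print([human.name for human in humans])  # ['A', 'C']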

A Social History Response

The article for this week touched upon the same idea as the previous "Computer Boys" article: Who has the power? With the advent of the computer, a whole new world of possibilities was created, as well as a whole new sect of people who could build and program these devices. The struggle was to decide how important and pivotal these men and women would be and how often the computer would be used. Due to the computer's ability to store data, it was quickly seen that the invention was not only here to stay but would quickly become pervasive. The concept of 'big data' was thus conceived, as companies could now store and analyze data like never before. From this data came the question of to what extent data can be restricted, and whether this is an invention that is more beneficial than detrimental.

The part about Twitter and the Occupy Wall Street issue was especially interesting, as both are extremely contemporary concepts and inventions, which adds a degree of relevancy. I personally have a Twitter account and have seen the trending topics on the side of the screen. In my experience these have been fairly accurate reflections of the world around me, as they track pop culture. It is interesting to consider that the Occupy movement may have been censored and, more importantly, whether Twitter has the right to do that. I would think not, since Twitter is a social media app for the people. What is the point of having a trending section if it isn't even accurate? Twitter can tweet its own views instead of trying to push ideas by restricting what can be seen. It is possible, and probably more likely, that it was just a mistake, or that the movement wasn't as pervasive as those running it thought. It does raise the question, though: how can we check, and are we then at a disadvantage for not understanding how these systems work?

Due to modern transportation and communication, the world is becoming a much smaller place. As a result, knowledge of popular languages is of paramount concern. Interestingly enough, this article raises the question of whether computer languages should be on the radar as well. Computers are undoubtedly ubiquitous at this point, and it is fair to say that most users are ignorant of how exactly they work. That brings to mind Elective C's point: "…in the future everyone must be data literate… or their lives will be destroyed by those who are." To be competent in the current world, do we need to know how to code, how algorithms work, and how databases work? Will we one day be at the mercy of those who do? While a dramatic question, it is important to consider the ramifications. Perhaps there will be a day when high-paying jobs require a background in coding. In any event, the article brings up key questions about data literacy and touches upon the larger question of the power of data.

Response: Computer Boys

It was very interesting to read about the modern struggle between management and the computer workforce from a historical perspective. It seems like such a contemporary issue, since every once in a while we hear about a tech company reorganizing to give either more creative freedom to its programmers or more authority to its executives. I thought it was funny that some of the stereotypes about programmers, portraying them as antisocial, unmanageable, or rebellious, were around during this era and probably originated around this time.

The biggest takeaway I got from this reading is that the challenge of managing and organizing creative high-technology workers, be they programmers, analysts, or designers, is one that has existed since the inception of the computer industry and is likely to persist well into the future. The problem isn't streamlining or regulating any one particular skill set; it's that every time a lower-level creative task is made methodical, it creates a new creative task: the organizing and streamlining of the lower-level task. This is a recursive problem; every attempt to make creative programming more management-friendly just makes the whole system more complicated. It seems the best that executives can do is try to strike a balance between efficiency and effectiveness and find the right level of oversight for their particular companies and departments.

Computer Boys Response

On page 146 of "The Computer Boys", Ensmenger alludes to "the new theocracy" as a pejorative term for computer personnel. This analogy seems much richer and deeper than the article's treatment of it. The parallels are numerous between the power structure of a late-1960s tech corporation and almost any premodern society with a strong religious structure.

The very earliest societies – ancient China and Egypt, for example – got around the problem of competing religious and political powers by declaring their emperor or pharaoh to be a living god. He thus became the ultimate head of both power structures by divine right, and exercised direct control over both of them. However, in societies where the king was not considered a god – or where the CEO didn't know the first thing about computers – a power imbalance was created. In post-medieval Europe, although the king did rule by divine right, he was definitely not divine himself. There was a separation of powers between Church and State, but the separation was incomplete, and each side struggled for more control over the fate of nations. The clergy had specialized skills and knowledge that lay people could not possess, much like the knowledge imparted by extensive technical education to a computer programmer.

Since the clergy had power over men's souls, the most important part of humanity, they saw themselves as the true power in the world. The kings, however, had political power based on station and bureaucracy, much like the managing structure of a corporation, and saw themselves as being in control. There are several major flaws with this analogy: while in the real-life example power shifted back and forth between the sides, corporate management always maintained a tight hold over the other source of power in its structure. Also, the power locus in a corporation doesn't affect the lives of nearly as many people as that of a nation. Despite its problems, the comparison is far more interesting than Ensmenger gives it credit for.

On Computer Boys

Stereotype Construction (149)

The stereotypes constructed for the winnowing of adept programmers are much in parity with contemporary stereotypes. I therefore surmise that the commercial dissemination of these winnowing stereotypes was the incunabulum of all further preconceptions about programmers.

My inquiry: Why were these stereotypes established?

Programming can be conceived as an intellectual occupation that strives for empirical output. As the precedent intellectual occupations had been -as an elite sect- devising abstract conjectures on the general community of humans, programming was received as a perversion (per its empirical, tangible output). Such a reception was aggravated by two in-tandem aspects: (1) the empirical output, the yield, purveyed potency over the abstract yield of the precedents, and (2) per this tangible yield, the act of programming was swiftly integrated into economics (aptly explored throughout the excerpt).

An interim clarification on the potency of the empirical yield:

This claim is based on my previous responses (initially on Foucault), wherein I stated that the human mind inherently seeks to achieve a solipsist state. In accord, in an utter (impossible) solipsist state, empirical information would completely acquiesce to the devisings of the mind. Programming is an occupation that approached this state through computers, which were, with the intellectual input of code, able to interact with the empirical world. This interaction with the empirical world therefore bestowed an extent of power upon programmers within the scope of the shared solipsist motive.

Conclusively, programmers were an unforeseen potency against other intellectual occupations, hence had to be debilitated. The means of debilitation were the stereotypes.

The stereotypes debilitated by ascribing social impotency to programmers. As stated in the article, the stereotypes mainly revolved around how programmers were unable to interact with the community -with humans. Since social interaction and the garnering of a community -as I have stated in class- are another means of power for humans, the stereotypes imposed social impotency to overshadow the potency of the empirical yield.

The dissemination of these stereotypes was lucrative per the intricate dynamics of power (potency) situated within them. People adopted conceptions, these stereotypes, that would provide the means to deprecate other individuals and hence emphasize the self's power -even if by a minuscule extent.

Assault on Privacy

The main point of the article was to discuss the consequences and threats that the new age of computing might bring forth. The article warned against a "Dossier Society," in which the government or big institutions hold information on every individual in the country. The author mentions that companies feel they need all the information they can get to be most effective in their marketing or research. Prior to the advent of the computer, it was impossible to amass and study large amounts of data, yet with the computer the age of "big data" was born. The novel 1984 was mentioned as well, as an allusion readers of the period would understand. The novel depicts a dystopian society in which the government is always watching and monitoring its people, and any hint of subversion results in punishment. The article suggests that this dystopian concept had become much more realistic.

It is most interesting that this article predates the internet's popularity. I wonder what Miller would think now that certain websites ask for credit card numbers, bank numbers, social security numbers, and other extremely personal and private information. Computers are much more intimate and integral to society now than they were before, and it is much more important to protect privacy now. Whole identities can be stolen through computers, and users must be much more vigilant. The article mostly talked about protection in terms of the big institutions that ask for data, and how they must protect the data they collect. Now personal computers are at risk as well, and they usually have less protection than big companies.

The article stressed possible ways of protecting data and raised the question of whether people will actually use privacy methods. Miller lists all these possible methods of protecting data, but notes that many companies will not take the time or spend the money to institute these changes. He also touches on the fact that there were very few laws governing computers, especially at that time. Computers were not invented with the intent to accomplish all they can do, and therefore it was hard to predict how to stop crimes committed with them. Now, hacking and the task forces against hacking form a huge network and a thriving industry, due to our dependency on computers. Miller was ahead of his time in asking about the extent of data companies should be allowed to ascertain, the liability of the companies that hold this data, and the role of government intervention in protecting it. All these issues are still very relevant and pressing today.