Data Literacy versus Algorithm Literacy (6)
Against the claim of Elective-C, I assert that prioritizing literacy in data over literacy in algorithms would merely engender an inchoate prudence: the delusion of retaining a profound and vigilant perspective on digital exploitation.
The abundance of memory is an evident premise of our contemporaneity. Therein, data is apt to be stored in protean and highly inefficient forms (hence big data, as we discussed in the first several classes). These attributes of data storage forms stand in stark contrast with those of early computation, wherein space, rather than processing speed, was the limiting factor. Stored data, hence, was kept in precomputed and therefore efficient forms that retained traces of the algorithmic processing. For early data forms, literacy in data would be apt, for such literacy would also comprise discerning the algorithm, hence the ends and the priorities that the data structure serves.
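A minimal sketch of this point, with invented data: a precomputed prefix-sum array is an "efficient form" in the old sense, and its very structure betrays the algorithm it was built to serve (fast range-sum queries), in a way the raw measurements alone would not.

```python
# Hypothetical illustration of a precomputed, space-era data form.
# What is stored is not the raw values but their running totals:
# the structure itself reveals the query it was prepared for.

raw = [3, 1, 4, 1, 5, 9, 2, 6]

# Precomputation: prefix sums, stored in place of the raw data.
prefix = [0]
for x in raw:
    prefix.append(prefix[-1] + x)

def range_sum(i, j):
    """Sum of raw[i:j] in O(1), thanks to the precomputed form."""
    return prefix[j] - prefix[i]

print(range_sum(2, 6))  # 4 + 1 + 5 + 9 = 19
```

A reader literate in such data forms, seeing running totals on disk, can infer the range-query algorithm without ever seeing its code.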
In our day, data forms no longer necessarily retain the vestiges of precomputation, hence of the algorithms. Moreover, with the increasing demand for coarse data (owing to its versatility), traces of precomputation can be conceived to have gone extinct. Therefore, a discrete literacy of algorithms is now imperative, because the handling of the data is no longer evident in the data itself.
Further explanation of how algorithmic traces evanesced from databases:
The extinction of algorithmic traces is mostly a matter of the evolution of programming paradigms rather than of the evolution of data retention forms.
To clarify this evolution: in earlier years, programming was not yet utilized for extracting social patterns or constructing complex relational observations and suggestions (as, for instance, Google does). Therefore, the uses of algorithms consisted solely of the fundamental algorithmic operations (such as sorting, altering, and conveying data); these fundamental algorithms were ultimately in strong correspondence with the data form that was elected for use. Hence, scrutiny of the database (data literacy) would reveal the algorithm itself.*
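The correspondence between data form and fundamental algorithm can be sketched with a hypothetical example (the records are invented): data stored pre-sorted all but announces that binary search is the algorithm operating on it.

```python
# Hypothetical sketch: the elected data form (a sorted array) corresponds
# directly to the fundamental algorithm it serves (binary search).
# Seeing the form, one can read off the algorithm.
import bisect

records = [102, 215, 340, 471, 503, 688]  # stored pre-sorted: a trace of intent

def contains(key):
    """Membership test in O(log n); only sensible because the data is sorted."""
    i = bisect.bisect_left(records, key)
    return i < len(records) and records[i] == key

print(contains(340))  # True
print(contains(400))  # False
```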
Contemporary programming mostly revolves around one paradigm: object-oriented programming (OOP). The database structures utilized in OOP are mostly on a par with those utilized before; nevertheless, the databases no longer display the characteristics of the algorithms that pertain to them. This is due to two reasons: (1) OOP mostly consists of encapsulating and exchanging code, so that a programmer acts as the end user of another programmer’s code, and (2) OOP enables very high-level programming, wherein code may read as an idiosyncratic discourse with the computer.
(1) Encapsulation is a primary, thus rampantly utilized, characteristic of OOP, and due to encapsulation, programmers generally do not have access to the deeper levels of code. The deeper levels handle the construction of data through generic, fundamental database forms (like those of the earlier practices), and programmers merely write code to access these data structures.
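A toy sketch of this separation (the class and names are invented for illustration): the underlying data form is one of the generic fundamental structures, yet the programmer who uses the class never sees it, only the access methods.

```python
# Hypothetical sketch of encapsulation: the underlying data form (a plain
# dict, a generic fundamental structure) is hidden behind an interface;
# the using programmer works against the methods, never the form itself.
class Registry:
    def __init__(self):
        self._store = {}  # generic form, invisible to the class's users

    def put(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)

# The "end user" programmer writes against the interface alone:
r = Registry()
r.put("alice", 42)
print(r.get("alice"))  # 42
```

Nothing in the using code reveals whether `_store` is a hash table, a tree, or a remote database; that is precisely the opacity the paragraph describes.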
This aspect also further emphasizes how data literacy would be incomplete without algorithmic literacy. In our time, data and algorithms are further segregated; the notion that data carries an algorithmic identity is further undermined. Therefore, an individual must be perspicacious about how the code executes atop the generically formed databases.
(2) With OOP comes the paradigm of the highest-level languages, which denotes code that closely imitates human language. Moreover, this imitation is not as in COBOL; with OOP, programmers get to define their own concepts, teach the computer (with code) how these concepts are handled, and then code using these concepts.** Due to the proliferation of such arbitrary concepts, the establishment of a standard database, tailored specifically to serve a large set of algorithms, is impractical.
This aspect, too, emphasizes the imperative of algorithmic literacy. Algorithms no longer consist solely of mathematical steps; they also comprise steps that are defined in accord with the whims of the programmers. The ability to distinguish where the mathematical process is interrupted by the whimsical code is the quintessence of establishing the true prudence needed to remain vigilant against digital exploitation.
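The interleaving of mathematical and whimsical steps can be sketched as follows; the function, the users, and the penalty rule are all invented for illustration. The mean is a mathematical step; the penalty is a choice no amount of data literacy would explain.

```python
# Hypothetical sketch: a "mathematical" step (a mean of scores) silently
# interleaved with a "whimsical" one (an arbitrary penalty decided by the
# programmer). Nothing in the stored scores reveals the penalty rule.
def rank(user_scores, flagged_users):
    ranking = {}
    for user, scores in user_scores.items():
        mean = sum(scores) / len(scores)   # mathematical step
        if user in flagged_users:          # whimsical step: why these users,
            mean -= 10                     # and why exactly 10 points?
        ranking[user] = mean
    return ranking

print(rank({"ann": [80, 90], "bob": [80, 90]}, flagged_users={"bob"}))
# ann keeps 85.0; bob drops to 75.0 for reasons invisible in the data
```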
* Theoretical example: A client, in much earlier times, had been using a database that was capable of swiftly retrieving the maximum of an attribute across ranges of a large set of recorded individuals. The client had also noticed that this attribute could be increased or decreased, again with great speed, for any set of consecutive records. With data literacy, this client can fathom that the underlying data format is a segment tree. Since a segment tree is largely inefficient for the addition of new data, the client may then deduce that this set of records is not amenable to change, and hence that the set may actually have been constructed for the purpose of a sociological investigation of a predetermined cohort size.
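A minimal reconstruction of the footnote's data form, with invented values: a segment tree with lazy propagation supports both operations the client observed (range maximum and range addition) in O(log n), while its constructor fixes the record count, the very rigidity from which the client infers a predetermined cohort.

```python
# Hypothetical sketch of the footnote's segment tree: range-maximum queries
# and range additions in O(log n). The size is fixed at construction, so
# appending new records would require rebuilding the whole structure.
class SegmentTree:
    def __init__(self, values):
        self.n = len(values)             # cohort size, fixed at construction
        self.tree = [0] * (4 * self.n)   # maximum over each segment
        self.lazy = [0] * (4 * self.n)   # pending range additions
        self._build(1, 0, self.n - 1, values)

    def _build(self, node, lo, hi, values):
        if lo == hi:
            self.tree[node] = values[lo]
            return
        mid = (lo + hi) // 2
        self._build(2 * node, lo, mid, values)
        self._build(2 * node + 1, mid + 1, hi, values)
        self.tree[node] = max(self.tree[2 * node], self.tree[2 * node + 1])

    def _push(self, node):
        # Propagate a pending addition one level down.
        for child in (2 * node, 2 * node + 1):
            self.tree[child] += self.lazy[node]
            self.lazy[child] += self.lazy[node]
        self.lazy[node] = 0

    def add(self, l, r, delta, node=1, lo=0, hi=None):
        """Add delta to every record in positions [l, r]."""
        if hi is None:
            hi = self.n - 1
        if r < lo or hi < l:
            return
        if l <= lo and hi <= r:
            self.tree[node] += delta
            self.lazy[node] += delta
            return
        self._push(node)
        mid = (lo + hi) // 2
        self.add(l, r, delta, 2 * node, lo, mid)
        self.add(l, r, delta, 2 * node + 1, mid + 1, hi)
        self.tree[node] = max(self.tree[2 * node], self.tree[2 * node + 1])

    def max_in(self, l, r, node=1, lo=0, hi=None):
        """Maximum attribute value among positions [l, r]."""
        if hi is None:
            hi = self.n - 1
        if r < lo or hi < l:
            return float("-inf")
        if l <= lo and hi <= r:
            return self.tree[node]
        self._push(node)
        mid = (lo + hi) // 2
        return max(self.max_in(l, r, 2 * node, lo, mid),
                   self.max_in(l, r, 2 * node + 1, mid + 1, hi))

t = SegmentTree([5, 1, 8, 3, 7])
print(t.max_in(1, 3))   # 8
t.add(1, 3, 10)         # increase records 1..3 by 10
print(t.max_in(0, 4))   # 18
```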
** Theoretical example: As a programmer, I can define an object of a class named Human. I can then define methods (concepts) named isANuisance() and delete(). Once these definitions are in place, I can simply tell the code: foreach(human in Humans) if(human.isANuisance()) human.delete(). With this code, I would be conducting an esoteric, whimsical discourse with the computer; no one else knows what I mean by “isANuisance”.
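The footnote's pseudocode can be rendered runnable in Python; the "nuisance" criterion below is, of course, an arbitrary invention, which is exactly the point: nothing outside this code reveals what the concept means.

```python
# A runnable Python rendering of the footnote's pseudocode. "Nuisance" is
# whatever the programmer privately decided; here, arbitrarily, any name
# shorter than four letters. Nothing in the data itself reveals this rule.
class Human:
    def __init__(self, name):
        self.name = name
        self.deleted = False

    def is_a_nuisance(self):
        return len(self.name) < 4   # the programmer's esoteric criterion

    def delete(self):
        self.deleted = True

humans = [Human("Al"), Human("Beatrice"), Human("Cy")]
for human in humans:
    if human.is_a_nuisance():
        human.delete()

print([h.name for h in humans if not h.deleted])  # ['Beatrice']
```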