domingo, 26 de abril de 2020

7 Lin Clark on WebAssembly

Typically, we use a combination of languages to create a website: HTML for its structure, CSS for styling, and JavaScript for the website’s behavior. But it seems there are better options. In “Lin Clark on WebAssembly,” a Software Engineering Radio episode with guest Lin Clark, she explains how WebAssembly gives the programmer more control over the code that will run, because JavaScript was not made to be fast (it was made to be easy). With WebAssembly the code runs more consistently, and Clark gives us an example: working with game developers using engines such as Unreal Engine, she saw that WebAssembly ran more smoothly and eliminated the frame drops that were common with JavaScript.

WebAssembly, then, is not a compiler itself but a compilation target: code written in C, C++, or Rust is compiled to it and runs alongside JavaScript in the browser’s virtual machine. It works with modules that export functions, and natively WebAssembly only understands integers and floats. Still, its portability makes it worth it, because it removes much of the work needed to adapt code to each kind of operating system: the same compiled module is translated to the correct machine code in each instance.

Another interesting feature WebAssembly has is that, as the binary code downloads in chunks over the web, it can be decoded (no parsing needed) and compiled at the same time; Clark calls this ‘streaming compilation’. As for security, every module accesses only its own memory object, which cannot be reached by third parties. If there is an attempt to do so, the module traps with an error.
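The isolation idea Clark describes can be sketched with a toy model. This is illustrative Python only, nothing like how a real wasm engine is implemented: each module instance owns its own linear memory, and any access outside its bounds traps with an error.

```python
class ModuleMemory:
    """Toy model of a wasm module's linear memory: a private byte buffer."""

    def __init__(self, size):
        self._bytes = bytearray(size)

    def load(self, addr):
        if not 0 <= addr < len(self._bytes):
            raise IndexError("trap: out-of-bounds memory access")
        return self._bytes[addr]

    def store(self, addr, value):
        if not 0 <= addr < len(self._bytes):
            raise IndexError("trap: out-of-bounds memory access")
        self._bytes[addr] = value


# Each instance gets its own memory; one "module" cannot reach another's.
a, b = ModuleMemory(16), ModuleMemory(16)
a.store(0, 42)
print(b.load(0))  # 0 -- b's memory is untouched by a
```

Writing into `a` never shows up in `b`, and an out-of-range address raises instead of silently reading a neighbor’s data, which is the guarantee Clark attributes to wasm modules.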

So, it appears that WebAssembly could become an important tool for web developers in the next few years to optimize their work, which means less time translating and more time creating and improving performance.

domingo, 19 de abril de 2020

6 Building Server-Side Web Language Processors


Following the thoughts of Ariel Ortiz from my previous post, we now delve into his article “Building Server-Side Web Language Processors”. This article defends the idea of teaching students to build web language processors because they are more relevant to our current context. In other words, this approach is better suited for students in this day and age because it prepares them to face real-world problems via the widely used World Wide Web.

As stated in the article, we are focused on the server side of web language processing and not the client side. Ortiz writes: “The purpose of a web language is to do some computations and then produce an output in the form of an HTML web page (or XML document, plain text, etc.). This means that a resulting page is built from two types of elements: dynamically generated elements and static content elements” (2). This also helps students get used to the template view, which is web language code embedded in the presentation code. In other words, the dynamic elements are embedded in the static ones, so the template closely resembles the output it produces.
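The “dynamic elements embedded in static content” idea can be sketched with Python’s standard-library string.Template, standing in for the template languages Ortiz discusses (the page and variable names here are made up for illustration):

```python
from string import Template

# Static HTML with placeholders; the dynamic values are computed server-side.
page = Template(
    "<html><body>"
    "<h1>Hello, $name!</h1>"
    "<p>You have $count messages.</p>"
    "</body></html>"
)

def render(name, count):
    """Fill the dynamic elements into the static content."""
    return page.substitute(name=name, count=count)

print(render("Ana", 3))
```

The static markup dominates the source, and the dynamic pieces sit inside it at exactly the spots where they will appear in the output, which is the resemblance the template view is named for.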

There are also other things to consider while building a web-based language. HTTP (Hypertext Transfer Protocol) becomes a topic to be understood because it is fundamental to the web: it is the request/response exchange between client and server that makes it possible to view what was coded. And related to this topic are the security issues that come with it.
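That request/response cycle can be sketched at the wire level. This is a toy handler for illustration only (real servers would use a framework or Python’s http.server; the `/hello` path is an invented example):

```python
def handle_request(raw_request: str) -> str:
    """Parse the request line of an HTTP/1.1 request and build a response."""
    request_line = raw_request.splitlines()[0]   # e.g. "GET /hello HTTP/1.1"
    method, path, _version = request_line.split()
    if method == "GET" and path == "/hello":
        body = "<html><body>Hello!</body></html>"
        status = "200 OK"
    else:
        body = "<html><body>Not found</body></html>"
        status = "404 Not Found"
    # A response is a status line, headers, a blank line, then the body.
    return (f"HTTP/1.1 {status}\r\n"
            f"Content-Type: text/html\r\n"
            f"Content-Length: {len(body)}\r\n"
            f"\r\n{body}")

print(handle_request("GET /hello HTTP/1.1\r\nHost: example.com\r\n\r\n")
      .splitlines()[0])  # HTTP/1.1 200 OK
```

Seeing the protocol as plain text like this makes it clear why Ortiz treats it as something students must understand before building anything on top of it.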

Since the WWW is publicly accessible, your application can be found and attacked if there are no security measures in place (something that would not happen with a command-line shell). Therefore, students become aware of the threats lurking on the web and learn to take steps to prevent them.

Ultimately, Ortiz’s idea of building a web-based language processor could prove a more relevant and enticing endeavor than the shell-style approach.

lunes, 13 de abril de 2020

5 Ruby and the Interpreter Pattern


As stated in “Language Design and Implementation using Ruby and the Interpreter Pattern” by Ariel Ortiz, we notice a different way to approach common code. Ruby approaches problems in an object-oriented way and, like Python, is interpreted, which makes programs easier to modify and extend. The SIF (S-expression Interpreter Framework) for Ruby builds up from an essential foundation: “The core of the SIF is very simple. It only supports integers, symbols, lists, and procedures.” (2)

Ruby is incredibly flexible and develops the programmer’s ability to think abstractly. Since there are multiple ways to extend the language, programming becomes synonymous with building. Ortiz mentions a few ways to produce new procedures to tackle problems: the framework itself contains a few primitive procedures, such as arithmetic, and it is possible to simply define a new primitive procedure to extend it. It is also possible to add special forms:
“For example, suppose we want to implement the if special form. Its syntax and semantics are as follows: Syntax: (if condition consequent alternative) Semantics: Evaluate condition, if the resulting value is not an empty list, evaluate and return consequent, otherwise evaluate and return alternative.” (3)
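Ortiz implements this in Ruby; as a sketch of the same semantics, here is a minimal s-expression evaluator in Python (lists written as Python lists, and, following the quote above, the empty list counts as “false”). The environment contents are invented for the example:

```python
def evaluate(expr, env):
    """Evaluate a tiny s-expression language: ints, symbols, lists, procedures."""
    if isinstance(expr, int):        # integers are self-evaluating
        return expr
    if isinstance(expr, str):        # symbols name bindings in the environment
        return env[expr]
    op = expr[0]
    if op == "if":                   # special form: only one branch is evaluated
        _, condition, consequent, alternative = expr
        if evaluate(condition, env) != []:   # empty list means "false"
            return evaluate(consequent, env)
        return evaluate(alternative, env)
    if op == "quote":                # special form: return the datum unevaluated
        return expr[1]
    # ordinary application: evaluate operator and operands, then apply
    fn = env[op]
    args = [evaluate(arg, env) for arg in expr[1:]]
    return fn(*args)

env = {"+": lambda a, b: a + b, "x": 10}
print(evaluate(["if", ["quote", []], 1, ["+", "x", 5]], env))  # 15
```

The key point of the quote survives the translation: `if` cannot be an ordinary procedure, because a procedure would evaluate both branches before being called, while the special form evaluates only the branch it selects.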

As Ortiz mentions, this approach has the advantage of keeping syntax and semantics separate, which means it is doubly important to know how to structure each of them to create functional code. That is why I believe it to be an excellent learning tool for programmers who want to delve deep into the core of code structure, although it is also true that if you do not understand its core procedures, your code will not work. It also develops the good habit of creating from scratch instead of copying and pasting from code examples commonly found in open-source projects and the like.

domingo, 29 de marzo de 2020

4 Mother of Compilers


Grace Hopper is known for her contributions to the computing world, which began with her Navy posting computing ballistic tables during WW2, after she had been a professor of mathematics. She served under Howard Aiken, the architect of the Mark I, a slow electro-mechanical computer. This huge calculator was a secret project at Harvard, and Hopper learned to code for it, which would later make her role indispensable.

There was another secret project housing another computer, the ENIAC, which was larger and could calculate more than the Mark I. These projects were so enormous that they consumed a major part of Hopper’s time, even more so because she and her team under Aiken were given a problem later revealed to be the implosion problem of the nuclear bomb, which, as we know, was used in Japan to decimate the population.

After the war, Hopper was relieved of her Navy duty and not allowed to return to being a professor, so she turned to a startup company whose purpose was to turn the computer into a household machine. It started properly with the UNIVAC I and Hopper’s contribution of implementing a compiler in 1951. Then the basis of COBOL (Common Business Oriented Language) was introduced, to allow people without PhDs in mathematics to communicate with computers.

I believe this woman’s path traced the journey of our programming world today. Her unwavering confidence to tackle problems differently from what was already established opened new lines of thought that constitute our contemporary programming languages. It is important to follow in her footsteps and not get comfortable with how things are now. We must continue to think of programming as a tool toward an end, not an end in itself.

*All the information in this blog was referenced from these two sources: Pages 1 and 2 of the 2013 article titled “Grace Hopper – The Mother of Cobol” from the “I Programmer” web site. The video documentary “The Queen Of Code” (16 minutes long), directed by Gillian Jacobs in 2015.

domingo, 22 de marzo de 2020

3 Internals of GCC


In “Internals of GCC,” a Software Engineering Radio episode with guest Morgan Deters, we get a general view of how the GCC compiler works. Deters says: “A compiler has to read a plain text, source file, and understand what that means […] anytime you use a variable it has to figure out which variable you’re referencing, how to access that variable, whether it’s a local variable or global variable […] and it has to understand the semantic content […] generally, compilers will produce some sort of internal representation to mull over this information, to understand it better themselves”. And GCC supports a vast array of programming languages, which can be compiled for a large range of processors.

GCC (the GNU Compiler Collection) can be run on various platforms and is very flexible; its front ends and back ends support many kinds of languages and machines, and Deters promotes its use across many different architectures. Internally, the compiler lowers the syntax tree produced from the source code into RTL (register transfer language) and runs many passes over it before emitting code for the target.

GCC has a front-end, a middle-end, and a back-end, which Deters explains in a modular fashion. The front-end is language-dependent but architecture-independent; the middle-end can operate on anything; and the back-end is language-neutral but architecture-specific. By architecture, Deters means a specific instruction set common to a family of chips (e.g., Intel’s 32-bit chips). This modularity makes GCC highly usable because of its easy integration with most types of software and hardware.

Although, not all of it is good news. By funneling every language through one general representation, some information from each language’s high-level tree is lost along the way, so the generated code can miss optimizations that a compiler built for a single language and target might find. Even so, this seems like an excellent option for those who need to share their work across multiple platforms, so their code can function anywhere.
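The three-stage split Deters describes can be sketched as a toy pipeline. This is illustrative Python and nothing like GCC’s actual internals; the “machine” and its instructions are invented:

```python
# Front-end (language-dependent): turn the source text into a tree.
def front_end(source):
    left, op, right = source.split()
    return (op, int(left), int(right))

# Middle-end (language- and target-independent): optimize the tree,
# here by folding a constant addition into a single value.
def middle_end(tree):
    op, left, right = tree
    if op == "+":
        return ("const", left + right)
    return tree

# Back-end (target-dependent): emit "assembly" for a made-up machine.
def back_end(ir):
    _tag, value = ir
    return [f"LOAD r0, {value}", "RET r0"]

print(back_end(middle_end(front_end("3 + 4"))))  # ['LOAD r0, 7', 'RET r0']
```

The payoff of the split is the same one Deters points to: a new language only needs a new front-end, and a new chip only needs a new back-end, while the middle stays shared.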

domingo, 1 de marzo de 2020

2 The Hundred-Year Language

Paul Graham talks about an interesting idea in his essay “The Hundred-Year Language”. He imagines what programming languages might be like a hundred years from now, and which current languages will disappear because of their ineffective essential axioms. He uses an analogy to get his point across: “It's like the rule that in buying a house you should consider location first of all.” The house is the language and the location is its foundation, because you can improve the house all you want but never change its location.

We can also bring other languages to the table: human languages, those not deliberately designed. Their growth depends on context: English has become a business language, while Italian, to name one, has been confined to a more ‘artistic’ role. I think programming languages will evolve the way human languages have: they will improve by being reduced to smaller, cleaner sets of axioms. Some programming languages, though, will not follow this evolution, because they are designed for machine efficiency rather than programmer convenience. But the author remains hopeful: “In language design, we should be consciously seeking out situations where we can trade efficiency for even the smallest increase in convenience.”

“Inefficient software isn't gross. What's gross is a language that makes programmers do needless work. Wasting programmer time is the true inefficiency, not wasting machine time. This will become ever more clear as computers get faster.” This is also a true statement. I believe the world assumes computers can do things by themselves, and so we design inefficient software where the programmer’s time is wasted for the sake of a faster implementation. This kind of thinking brings more trouble than solutions, because it resolves an issue on the surface while ignoring the foundation, which in the end creates more problems to correct, problems that would not appear if the essential code were structured properly.

In the end, we must rethink the way we design programming languages, because what we use today will not be relevant tomorrow. But I can also see the difficulty in doing so, because “[…] our ideas about what's possible tend to be so limited by whatever language we think in that easier formulations of programs seem very surprising.” It is hard to reduce anything to its core value because we tend to think at a much more superficial level.

All in all, I remain as hopeful as Paul Graham in that technology will not overcome us wholly. There will be programming languages that become irrelevant immediately, but we just need one to work properly so it can evolve at the same pace as future technology.

domingo, 23 de febrero de 2020

1 Making Compiler Design Relevant for Students

After reading the precise thoughts of Saumya Debray displayed in “Making Compiler Design Relevant for Students who will (Most Likely) Never Design a Compiler”, I can honestly say that I view compiler design in a whole new light. As he says, courses “[…] typically focus narrowly on the translation of high-level programming languages into low level assembly or machine code” (5), but it does not have to be this way.

Compiler design follows a set of rules that can shape the way we view translation problems. Since the task involves a step-by-step process, it encourages a detailed approach often lacking in the minds of programmers nowadays.

To illustrate my point a bit, the phases a compiler goes through are essential knowledge for anyone thinking of pursuing a career as a programmer. One must understand these phases to be able to construct optimized code:
1. Lexical analysis and parsing: which means “[…] examining the input to be translated and dividing it into groups of adjacent characters, called ‘tokens’” (3).
2. Semantic analysis: meaning the type or scope of variables.
3. Code generation: explained more generally “[…] as an instance of the process of translating from a representation of a source language entity to that of a corresponding target language entity” (4).
4. Code optimization: which aims to reduce costs by transforming the output code into one with better performance (be it in time, size, or energy usage).
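The first of these phases can be made concrete with a minimal tokenizer. This is an illustrative Python sketch of lexical analysis for a made-up expression language, not any particular compiler’s lexer:

```python
import re

# Token patterns for a tiny expression language.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),
]
TOKEN_RE = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(source):
    """Lexical analysis: group adjacent characters into tokens."""
    tokens = []
    for match in TOKEN_RE.finditer(source):
        kind = match.lastgroup
        if kind != "SKIP":               # whitespace separates tokens
            tokens.append((kind, match.group()))
    return tokens

print(tokenize("total = price * 2"))
# [('IDENT', 'total'), ('OP', '='), ('IDENT', 'price'), ('OP', '*'), ('NUMBER', '2')]
```

Exactly as Debray’s definition says, the input is divided into groups of adjacent characters; the later phases (parsing, semantic analysis, code generation) would then consume this token list rather than raw text.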

So, it seems that compiler design is more than it would appear. Understanding this structure develops our way of designing code, because it gives us the essential components of a problem and a way to resolve that problem step by step. In conclusion, having this example of design can shape our understanding of the similarities between the programming languages available to us.

jueves, 13 de febrero de 2020

Hello there.

Me, José Kotásek

First off... I'm facing the legendary Compiler Design class, which is encouraging and frightening at the same time. I believe this class is a perfect wrap-up for my degree, one where I can actually see how my abstraction skills have developed. Hope they have...

As for my hobbies, I love music... listening, composing or playing it. I try to make every listening session a special moment. Also I like spending time with my friends in a bar drinking water.

Recently I haven’t had much time to watch any series, just the must-see Marvel movies; can’t miss those, and can’t wait to have more time to catch up on everything I’ve been missing. Oh, and re-watching some shows from my golden years (Smallville and Vampire Diaries, guilty pleasures for sure).
