
Shopper or shoplifter: Origins of the browser part 1

(Image credit: 377053 / Pixabay)

Shortly after the earth cooled and life began, Sir Tim Berners-Lee invented the World Wide Web. This was approximately the year 1990. He also built the first browser, confusingly and yet inspirationally called WorldWideWeb.

Since then things have evolved. The browser, which began as a simple visual interpretation of a markup language used to create a textual representation of visual elements, is now a dangerously functional runtime environment more comparable to our host operating systems (e.g., Linux, Windows, macOS) than you might expect. Developers may compare the browser to the Java virtual machine (JVM) or the common language runtime (CLR) upon which many newer desktop applications run.

In the ’90s the browser world was very different. There was no Chrome (there was no Google for most of it), or Firefox, or Safari. Early browsers like Lynx and NCSA Mosaic gave way to more commercially developed works like Netscape Navigator.

In 1995, Microsoft bundled the first version of Internet Explorer into Windows 95. Netscape responded with an antitrust lawsuit that would ultimately push the company backward (even though it was right). Netscape did, however, succeed in, and should be credited for, spawning two legendary contributions. One was the Mozilla Foundation, which brought us Firefox. The other was the original sin of the browser: JavaScript.

JavaScript was originally called Mocha and then LiveScript. Since the language has nothing to do with the Java language, it probably should’ve kept one of the early names. However, thanks to Netscape’s marketers riding Java’s coattails, we’re stuck with JavaScript (a.k.a. JS).

Why is JavaScript the original sin of the browser?

JavaScript changed our browsing experience in a fundamental way. No longer were web pages or websites developed in a static brochure style (remember the <blink> tag?), relying only on basic hyperlinks to load new content and generate interactivity.

JavaScript allowed web developers to embed programming into the page itself and use it to create, modify, and react to the page. At first it was used for simple things, like form validation. This made for a better user experience and faster performance, because some of the decision-making now took place on the client side instead of 100 per cent of the burden falling on the server.
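Client-side validation of that era amounted to a few lines of JavaScript run before the form ever reached the server. A minimal sketch, assuming a simple order form (the field names and rules here are illustrative, not from the article):

```javascript
// Hypothetical client-side form validation: check fields before submitting,
// so the server never sees obviously bad input.
function validateOrderForm(fields) {
  const errors = [];
  if (!fields.name || fields.name.trim() === '') {
    errors.push('Name is required');
  }
  // A deliberately loose email check, typical of early validation scripts.
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(fields.email || '')) {
    errors.push('Email address looks invalid');
  }
  return errors; // an empty array means the form can be submitted
}
```

In a browser this would run on the form's "submit" event, cancelling submission (event.preventDefault()) whenever the returned array is non-empty.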

Of course, JavaScript would’ve been severely limited without its sister initiative, the DOM (Document Object Model). The DOM is essentially a programmatic map of the web page that is made available to the embedded language. In the early days, Microsoft and Netscape developed competing versions and largely fought over how it should look, until the 2000s, when it started to settle into a standard.

Fast-forward to 2004, when DOM Level 3 was published. Software security guru Gary McGraw released his seminal book Software Security shortly afterward, in 2006. Web pages and websites were now being called web applications, and things were changing fast—faster than we realised.

The standardised DOM now enabled web application developers to embrace the many functions of JavaScript:

  • JavaScript can add, change, and remove all HTML elements and attributes in the page.
  • JavaScript can change all CSS styles in the page.
  • JavaScript can react to all existing events in the page.
  • JavaScript can create new events within the page.
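The capabilities above can be sketched as plain functions. This is a hedged illustration: in a browser you would pass the global document object and real DOM nodes, and the function names (addNote, highlight, onClick) are invented for this sketch, not a standard API.

```javascript
// Create a new HTML element and add it to the page.
function addNote(doc, parent, text) {
  const p = doc.createElement('p');
  p.textContent = text;
  parent.appendChild(p);
  return p;
}

// Change a CSS style on an existing element.
function highlight(el) {
  el.style.backgroundColor = 'yellow';
}

// React to an existing event (here, a click) on an element.
function onClick(el, handler) {
  el.addEventListener('click', handler);
}

// In a browser:
//   const note = addNote(document, document.body, 'Hello from JavaScript');
//   highlight(note);
//   onClick(note, () => console.log('note clicked'));
```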

The language can rewrite everything on the page. It has total control and was responsible for turning largely form-based interactions into…well, Facebook—a web application with dynamic content, real-time updates, and interactivity between users.

Let’s examine modern browser power

Consider the change in application development from desktop to web-based. There are key differences in the trust model between the two contexts. In the legacy model, where we downloaded and ran applications, we trusted the organisations that created those applications (e.g., Microsoft), and therefore the products themselves. Additionally, there was no shortage of virus-checking software solutions, like McAfee and Comodo, ready to jump on anything that looked off.

Through the trust model, we’ve been taught to trust something we’ve already used successfully. But what we need to acknowledge now is that each time we return to something like Facebook or Netflix, it’s highly unlikely we’re running the same application in our browser. Take Netflix as a highly advanced example of how development really works today: a benchmark for continuous integration and deployment, with roughly 16 minutes between a developer changing the code and that change being deployed globally.

We are heavily reliant on developers making changes we can trust in real time. In the case of major players like Facebook and Netflix, security is paramount. Secure development culture, tools, and processes are all part of the environment. But that isn’t necessarily the same for every organisation, particularly smaller companies without the expertise or knowledge to know what they are missing.

Yet the attack surface is continually changing and growing.

Steve Giguere, lead sales engineer, Synopsys

Steve Giguere
Steve Giguere is a lead sales engineer at Synopsys. He works tirelessly to encourage firms to build security into their software development process, ensuring that defects are identified and eradicated early in the SDLC.