greyhatsage.com – Every sage has a story...

Where are my smartwheels already?
https://greyhatsage.com/where-are-my-smartwheels-already/ – Sun, 10 Mar 2024
 

This time I want to do a post more along the lines of what I intend this site to be – that is, looking at problems, looking at past solutions, and then figuring out whether current ones are adequate or suggesting improvements to help break out of “the loop,” per se.

So to start, let’s talk about tracked vehicles and things of this nature.


Tractors have been around for a long time.  They found their moment as a solution to a horse shortage at the beginning of World War I, when the horses were being sent off to Europe for cavalry use.

“Between 1914 and 1918, the US sent almost one million horses overseas, and another 182,000 were taken overseas with American troops. This deployment seriously depleted the country’s equine population. Only 200 returned to the US, and 60,000 were killed outright.” 

Cite: https://web.archive.org/web/20100926051214/http://imh.org/legacy-of-the-horse/the-horse-in-world-war-i-1914-1918/ –  International Museum of the Horse. Archived from the original on 2010-09-26. Retrieved 2010-09-15. via Wikipedia 

Certainly, there were other tractors invented before WW1.  But like many inventions, they often arrive before their time of need.  Once horses became scarce, a need arose, and industry was there to fill it.

The Holt 75 model gasoline-powered Caterpillar tractor used early in World War I as an artillery tractor. Later models were produced without the front “tiller wheel”, c. 1914.

 
Ironically, the concept of the “tank” predates this machine by about 400 years, through a sketch made by Leonardo da Vinci.
https://en.wikipedia.org/wiki/Leonardo%27s_fighting_vehicle
 

Here we can see the Holt 75 being used by the French to pull an artillery piece.


 

Eventually, during the First World War, there were more recognizable versions of what we now know as the tank.


Looks like we now know where the tank they used in “Indiana Jones and the Last Crusade” came from.

 
So we know where it came from.  How is it being used today?

Well, we have the tank still. 


We also have converted vehicles using tracks, like:


The Mars Institute “HMP Okarian” Humvee with Mattracks during the Northwest Passage Drive Expedition (2009-2011).

 
And of course the Sno-Cat (seen here in Antarctica):

 

“Ok, great” you say. 
“Where are you going with this then?” you ask.

Smart wheels.

 
Remember how, in Snow Crash, they discussed spoked smartwheels fairly extensively?  Where are they?
 
The closest analogs I know of are Mecanum Wheels or Omni Wheels:
Mecanum Wheels (above) –  Omni Wheel (below)

And I know of Boston Dynamics’ use of peg legs to create dog-like robots that can handle stairs and uneven terrain:



However, in Snow Crash, they described spoked wheels:

Smartwheels use sonar, laser range finding and millimeter wave radar to identify mufflers and other debris. Each one consists of a hub with many tiny spokes. Each spoke telescopes into five sections.

On the end is a squat foot, rubber tread on the bottom, swiveling on a ball joint. As the wheel rolls, the feet plant themselves one at a time, almost glomming into one continuous tire. If you surf over a bump, the spokes contract to roll over it.

If you surf over a pothole, the rubber prongs probe its asphalt depths. Either way, the shock is thereby absorbed, no thuds, smacks, vibrations, or clunks will make their way into the plank or the Converse hightops with which you tread it.

The ad was right – you cannot be a professional road surfer without smartwheels.

In writing this article, I’ve just learned about another wheel from history:

https://en.wikipedia.org/wiki/Pedrail_wheel

Now I have to give mucho credit to Technovelgy – http://www.technovelgy.com/ct/content.asp?Bnum=117 – for bringing me up to speed, as he’s been keeping up with the changes in technology in this realm.

 

So yeah, there have been work efforts made in this field.  Obviously more could be done, but it shows that ideas from the past are still inspiration for future ones.

 
Is “Tech Debt” worse than “National Debt”? Yes.
https://greyhatsage.com/is-tech-debt-worse-than-national-debt-yes/ – Fri, 23 Feb 2024

Inspired by:
https://g.co/gemini/share/a41a715cdc94
and
https://xkcd.com/980/


Debt.  It is in each of our lives and, frankly, it is how we build tomorrow, today.

To think there is a way to go through life without ever incurring debt is to live a life spending time tending to things personally that could instead be spent building bigger things – collectively – and then paying the debt back with the benefits of what you built together, along with the satisfaction of helping someone who helped you.

We put off or defer things today to build the improvements we wish for in our lives, but when it comes time to pay the bill, we begrudge paying it.  Is it entitlement?  Is it just the idea of trying to cheat reality to get something for nothing?  Physics teaches us, in the laws of energy conservation, that we cannot get motion without an energy expense.  Why would economics be any different?

 

Then: 30 years ago – The Mythical Man-Month

The Mythical Man-Month is a book by Fred Brooks that describes a logical fallacy: that simply adding more people to a delayed project will magically bring it back on schedule.  The idea, known as “Brooks’s law,” holds that adding people to an already late software project makes it even later.

The cause is mostly “ramp-up” time – the time it takes for a person new to the project to learn the project itself and the engineers on it, plus the decreased productivity of the people pulled aside to train the newcomer, along with the learning “mistakes” the newcomer may inflict on the project, making it later.  While this is inevitably how we onboard new hires at every job I’ve ever worked, and an oversimplification noted by the author himself, it nevertheless conveys a general idea that stands the test of time.

Another cause is the number of communication channels necessary to keep a project in sync, which grows much faster than the headcount itself, as sketched below.
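
Brooks even gives the arithmetic: with n people on a project there are n(n−1)/2 possible pairwise communication channels, so coordination overhead grows quadratically while working capacity grows only linearly.  A quick back-of-the-envelope sketch (plain Python; the team sizes are arbitrary examples):

```python
def channels(n: int) -> int:
    # Pairwise communication paths among n people: "n choose 2".
    return n * (n - 1) // 2

for team in (3, 10, 20, 50):
    print(f"{team:>2} people -> {channels(team):>4} channels")
# 3 -> 3, 10 -> 45, 20 -> 190, 50 -> 1225
```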

Lastly, the author mentions that adding more people to a highly divisible task, such as cleaning rooms in a hotel, decreases the overall task duration (up to the point where additional workers get in each other’s way).  However, other tasks, including many specialties in software projects, are less divisible; Brooks points out this limited divisibility with another example: while it takes one woman nine months to make one baby, “nine women can’t make a baby in one month”.

 

There are other cases as well where both technical debt and Brooks’s law likely contributed to a project’s overall failure:


The Space Shuttle project was doomed almost from the beginning in its design and goals.  NASA originally wanted a purely liquid-fueled spacecraft capable of reuse.  After the Apollo missions were shut down, there was a drastic change in the country’s funding priorities, and ultimately the DoD and NASA were forced to work together.  There is some debate over whether philosophical design differences led to solid rocket boosters – mostly used by the DoD in the nuclear ICBM arsenal – being incorporated into the Shuttle design, but it is apparent that cost cutting incurred technical debt that was never fully repaid over the program’s history, leading to tragic losses of life.

https://en.wikipedia.org/wiki/Space_Shuttle_design_process


“The wrong car at the wrong time.”

“The aim was correct, but the target had moved.”

The Ford Edsel was plagued by compromised design decisions and management decisions meant to “streamline” the models across various makes.  These changes caused production to suffer overall: there was no dedicated Edsel production line, so existing lines had to retool and re-bin parts every time Edsels were produced in the existing factories.  The end result was a product nobody wanted, due to the lack of a real feedback loop from the customer to the designers and management.

https://en.wikipedia.org/wiki/Edsel#Edsel%27s_failure



The Panama Canal development was a plan to join 51 miles of waterways to connect the Atlantic Ocean to the Pacific.  Planning began over 150 years prior to the actual opening of the canal, which occurred within a 10-year project span.  This comparatively short span (in fact, two years ahead of schedule) was assisted by a prior French effort from 1881–1899 that ended in dismal failure due to disease and a brutal underestimation of the effort needed.

https://en.wikipedia.org/wiki/Panama_Canal#French_construction_attempts,_1881%E2%80%931899

 

Brooks also discusses several other concepts and challenges in software development, including:

  1. Conceptual Integrity: The importance of maintaining a consistent design and architecture throughout the development process to ensure the overall quality and coherence of the final product.

  2. The Tar Pit: The complexity of software development, likened to a tar pit where projects can easily become mired in difficulties and delays.

  3. Surgical Team Model: The idea that small, skilled teams are often more effective than larger, less cohesive ones, particularly in complex projects.

  4. The Second-System Effect: The tendency for engineers to overdesign their second system, leading to unnecessary complexity and delays.

  5. No Silver Bullet: Brooks argues that there is no single, revolutionary innovation or approach that can magically solve all the inherent difficulties and complexities of software development.

But technical debt is a silent killer of projects and budgets.  It leads to increased effort and cost on work already “completed”.  Probably one of the biggest examples is the “Year 2000” problem, where date fields stored only two digits for the year, implicitly accounting for just the 100 years from 1900 to 1999 rather than 1000 or 10000 years.  Considering modern computing was only commonplace for maybe 50 of those years, this is an incredible oversight.
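
One common remediation of the era was “windowing”: rather than widening every two-digit field, the stored year is interpreted around a pivot.  A minimal sketch (plain Python; the pivot value here is an arbitrary example, not any specific vendor’s fix):

```python
def expand_two_digit_year(yy: int, pivot: int = 70) -> int:
    """Interpret a two-digit year around a pivot: values >= pivot
    become 19xx, values below it become 20xx."""
    if not 0 <= yy <= 99:
        raise ValueError("expected a two-digit year")
    return (1900 if yy >= pivot else 2000) + yy

assert expand_two_digit_year(99) == 1999  # "99" -> 1999
assert expand_two_digit_year(24) == 2024  # "24" -> 2024
```

Note that windowing only defers the ambiguity: the same debt comes due again as dates approach the far edge of the window.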


Even to this day, there are systems built in that era that went through this “technical debt” maintenance operation, have not changed since, and are still run as critical operations for many major corporations and government entities.

Migration to more modern hardware and virtual platforms is only a partial solution, because the reason these systems have existed for so long is the deep assessment and accounting of their operations accumulated over a long period of time.

Any architectural change to such a platform has the potential to introduce unexpected changes in the application being supported.  And since many of these systems are financial in nature, that could directly impose substantial costs on the organization that runs them.  These applications are crucial across many different industries: from accounting of medical costs to insurance or Medicare, to payroll and unemployment benefits on a state’s rolls, and many in between, from aerospace to agriculture.

In the end, these are systems that will take a long time and a big budget to properly engineer replacements for.  A shortcut solution will not be applicable here without extensive testing and sign-off.

Don’t forget there is a datetime issue coming in 2038, relating to storing Unix time as a signed 32-bit integer.  So tick-tock…
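
The arithmetic behind that deadline is easy to check (plain Python): a signed 32-bit count of seconds since the Unix epoch runs out on January 19, 2038.

```python
import struct
from datetime import datetime, timezone

INT32_MAX = 2**31 - 1  # largest signed 32-bit value

# The last second representable as a signed 32-bit Unix timestamp:
print(datetime.fromtimestamp(INT32_MAX, tz=timezone.utc))
# -> 2038-01-19 03:14:07+00:00

# One second later no longer fits in a 32-bit field:
try:
    struct.pack("i", INT32_MAX + 1)
except struct.error as exc:
    print("overflow:", exc)
```
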
Obligatory XKCD on the topic…

It's taken me 20 years, but I've finally finished rebuilding all my software to use 33-bit signed ints.

 

Now: Infrastructure as Code (IaC), Virtualization and Containerization

Today, data lives in “clouds”.  Well, not in fluffy, transient formations of water vapor, but in large, amorphous buildings known as datacenters.  We like to joke that companies did not move their computing resources to a cloud, but to someone else’s datacenter.


But it took more than that to get here.  The first step was to transition from servers that were built and cared for as singular projects to something that scales.  We call the old mindset treating servers as “pets,” and engineers often did treat them like pets, going as far as giving them individual names.  The problem with this mindset is that we then begin caring for them as such, including hand-patching and one-off systems maintenance.  Projects built on them became myopic in scope: applications were built around this “family” of infrastructure, and the continual improvement that inevitably followed – while not bad in and of itself – made the whole application running on this “pack” of server hardware a one-off.


Duplicating a one-off is difficult, in systems engineering parlance.  While installing the operating system and base components is usually fairly routine, where it becomes unique is the software configuration for the application; the network and storage requirements; and the cabling and hardware resources needed to scale horizontally when computing loads push the platform to its vertical limits.

The biggest problem: getting it exactly right every time you add a new server, and propagating changes across multiple “duplicate” hosts.


This is the point where we began to treat systems, and the entire infrastructure, as software code (Infrastructure as Code – IaC).  The premise is that there is a single source of “truth” – usually the scripting and software used to manage the servers – which interrogates the servers for their “state” information, including patching and software configuration, right down to their kernel and network settings.

The effect is that we can now completely wipe out a server, or replace it, and rebuild it to be configured exactly the way we wish, at the press of a button.

Servers are no longer “pets” at this point. 

They become what we call “cattle,” and this lends itself to the idea of “idempotence,” where a server becomes the result of software code instructions rather than the culmination of continual tuning and maintenance on an individual system.
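
As a minimal sketch of what “idempotent” means in practice (a toy resource in plain Python, not any particular IaC tool’s API): applying the same desired state twice yields the same result, and the second run is a no-op.

```python
from pathlib import Path

def ensure_line(path: Path, line: str) -> bool:
    """Idempotently ensure `line` is present in the file at `path`.
    Returns True if a change was made, False if the system was
    already in the desired state."""
    existing = path.read_text().splitlines() if path.exists() else []
    if line in existing:
        return False  # desired state already holds; change nothing
    path.write_text("\n".join(existing + [line]) + "\n")
    return True

cfg = Path("/tmp/sshd_config_demo")
print(ensure_line(cfg, "PermitRootLogin no"))  # True: change applied
print(ensure_line(cfg, "PermitRootLogin no"))  # False: already converged
```

Run it as many times as you like; the file ends up in the same state, which is exactly the property that lets management software blindly re-apply configuration to a whole fleet.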

Improvements are made as new software versions: the instructions are iterated, then propagated through management software to all applicable servers in machine-coded fashion.

Deus ex machina.


Now that we have scriptable software installing the operating system and the applications that run on it (along with their configuration components), we turn our attention to increasing our computing density.

When the Apache web server matured, it offered a feature built on the part of the HTTP exchange that names which website the request is asking for: the Host header (with SNI playing the analogous role for SSL/HTTPS).  The point is that a single webserver can host multiple websites on one machine; the client reports the desired site as part of the HTTP exchange, and Apache maps that to the correct configuration in its software and serves the correct set of files from its storage back to the requesting client.
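
A minimal sketch of name-based virtual hosting (a toy Python server, not Apache’s implementation; the hostnames and pages are made up): one listener, many sites, dispatched on the Host header.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical per-site content, keyed by the HTTP/1.1 Host header.
SITES = {
    "blog.example.com": b"<h1>Blog</h1>",
    "shop.example.com": b"<h1>Shop</h1>",
}

class VirtualHostHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # One IP and port, many sites: the Host header picks the site.
        host = (self.headers.get("Host") or "").split(":")[0]
        body = SITES.get(host)
        if body is None:
            self.send_error(404, "no such virtual host")
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), VirtualHostHandler).serve_forever()
```

`curl -H "Host: shop.example.com" http://127.0.0.1:8080/` returns the shop page; the very same socket serves the blog when the header changes.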


But this is just for webservers.  How can you do the same for whole computers, where some servers get traffic occasionally and others carry heavy loads?  Both cost relatively the same; both use the same resources and energy.  Can you take advantage of the unused resources of hosts to serve out more applications?  More to the point, can you make a machine within a machine?

At this point we are beginning to get closer to the “Second-System” anti-pattern but I will address this in just a moment.



Virtualization allows for computing resources to emulate in software a whole virtual computer at a virtual hardware level. 

There are good reasons for this – namely domain separation: you can run completely different operating systems at the same time on the same physical hardware, through a hypervisor and hardware abstraction layers, where a disk operation is segmented down to lower layers with specific scopes (ultimately a given file) and passed back out again to the virtual operating system.

But there is a performance hit from virtualization’s overhead, and from “second-system” paravirtualization efforts whereupon functions are translated twice: once within the virtual machine, talking through its virtualized hardware driver to the VM’s lower-level abstractions, and again between the virtual machine and the host operating system, which actually performs the operation by proxy for the virtual machine.



Then comes “containerization”, which originally came about as a means of “virtualization” without the hypervisor (a hypervisor being the software layer that does the abstraction between virtual and real hardware).


This came about from the thinking that if you make binaries accessible inside a “jailed” user-level space, you can technically have an entire other system operating on top of the real one – completely “virtual” in that it accesses binaries passed through from the host system, with the results remaining inside the “jailed” space so they don’t impact the host’s operations or configuration.
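
The oldest ancestor of that “jail” idea is chroot(2).  A minimal sketch in Python (illustrative only – it must run as root, and real container runtimes add PID/network/mount namespaces plus cgroup limits on top of this basic filesystem confinement):

```python
import os

# An ordinary host directory becomes the jail's entire "/".
jail = "/tmp/minijail"
os.makedirs(jail, exist_ok=True)

pid = os.fork()
if pid == 0:  # child process: confined to the jail
    os.chroot(jail)  # requires root (CAP_SYS_CHROOT)
    os.chdir("/")
    print("child sees only:", os.listdir("/"))  # the empty jail dir
    os._exit(0)
os.waitpid(pid, 0)
print("host unaffected; /etc still exists:", os.path.exists("/etc"))
```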


Docker and containerd are the predominant “containerization” middleware today.

Where I think things have gone fully “second-system” is Kubernetes, which has taken hold in many companies as their panacea for managing numerous containerized “virtual machines”.


Often these containers hold stripped-down, specific versions of libraries and binaries – one of the strengths of containerization is being able to abstract the code from the underlying system’s operations, and therefore to snapshot a particular configuration.


That is not the problem, in my view.  The problem is that managing and utilizing this toolset becomes a requirement for people to learn, and life on the cutting edge is plagued with ideas trying to solve what their authors feel are immediate problems, or head off what they feel are long-term ones – but in the end, these lend themselves to increased noise, babel, and a diaspora of toolsets.

Now, I’m not saying that we should not develop tools that solve problems. 

But Docker did create Docker Swarm and Docker Compose, which accomplish everything Kubernetes tried to arrange – yet did it as individual components, not as an overall system.

One must remember that a container is a level of abstraction.  And Kubernetes is a container management system, often run inside virtual machines, to control containers in a virtual way.

Kubernetes, to me, is a “second-system” anti-pattern.  

And it’s already suffering from it. 

Helm, the most adopted “configuration” toolset for Kubernetes, has the same issues as older “docker-compose” or “Dockerfile” scripting: both often suffer incompatibilities or deprecated-syntax issues as the interpreting application evolves.  Docker, to its credit, gives better backwards compatibility by versioning the intended “docker-compose.yaml” language.


Unix and its derivatives were built upon – and their greatest asset is – that they both:

  • Sees everything as a file
  • Is an operating system built on tools (binaries/libraries) that utilize other tools (binaries/libraries) all contained within the system that it sees as files. 

When we begin to design tools that walk away from the original design aesthetics of the operating system, things tend to go wrong.


Internet: Good/Bad?
https://greyhatsage.com/internet-good-bad/ – Sun, 18 Feb 2024
 

30 years.  Wait, that’s not right…


30 YEARS!!!


Spinning around a fireball on a ball of mud that is also spinning around a deep dark funnel of gravity as its play thing…

Thirty (30) years is insignificant to a Universe with Carl Sagan’s “billions and billions” voiceover behind it.  But we are creatures whose lives are finite and infinitesimally small compared to the Universe, or even the star we orbit, let alone the ball of mud we call “home”.

Thirty (30) years – while a blink of an eye to a tree or turtle – is incredibly long to an insect or even a human child.

So what have we learned after being connected globally, sharing one common network, en masse, for thirty (30) years?

Let’s go back then… 🙂

 

Then:  Internet – 30 years ago


Yes, Virginia, there was an Internet prior to 30 years ago.  The term was coined much earlier, and even 30 years ago the basics and concepts of the “World Wide Web (WWW)” had already been invented by Tim Berners-Lee at CERN in Geneva, Switzerland.


But it did have a predecessor in hypertext and hyperlinks:  Gopher


Gopher was console-based (Windows 95 didn’t exist quite yet; in 1994 consumers had Windows 3.0/3.1), and most of our connections to the Internet weren’t direct.  AOL had not yet opened its gateway to the Internet, and commercial access was not nearly as ubiquitous as it is today, with the Internet on everyone’s smartphones – it is said there are now more mobile phones than people on the planet.

(See: “There are now more mobile phones than people in the world” – World Economic Forum.)

And when I say direct access, I don’t even mean a browser.  I mean text, which did not require a TCP/IP stack on the user’s computer.  That was taken care of by a central computer that everyone logged into.  We were either hardwired into, or had to dial into, what are called “terminal servers,” and those would then present options for you to access various computing resources – if you were allowed.

Prior to that, you had X.25 packet data networks: commercialized data networks that were cheaper than the Regional Bell Operating Companies (RBOCs), which were themselves the result of AT&T/Bell System being broken up after being deemed a monopoly.  Prior to that breakup, voice calls and data were even more expensive.

https://oldvcr.blogspot.com/2022/04/tonight-were-gonna-log-on-like-its-1979.html

The breakup of the Bell System allowed other third-party “common carriers” to enter and attach to the existing telephone system, and competition in the market began to force costs downward.  Companies like Sprint and MCI were major players in lowering costs to the consumer.

But if you wanted to avoid a long-distance bill (for calls greater than 50–100 miles from your calling city), you were basically stuck with local calling, which came at a more “flat” rate as part of having telephone service.

[Image: Elvis Presley’s phone bill, July 15, 1968]

This is another story in itself, but I firmly believe this is the missing piece of the Internet – and it is a key piece to keep in mind for the rest of this story.

 
 
 

Now:  Internet – present time


Over the past 30 years, I’ve watched what was once a very intelligent community of personages devolve into a bizarro world where “truth” is now a “dirty” term.

It (the Internet) is used as a mechanism for spreading false information to people who either cannot or do not want to research its validity, who believe every piece of data or information they receive is “God-given with irrefutable accuracy,” and who then pass those same false stories on to their own sphere of influence to disseminate further.

Part of this, I assert, comes from a self-image in which those who are “righteous” deem themselves equally “empowered” to push their beliefs upon others as a “moral calling” from their higher power, without regard for others’ views, in an effort to save someone from their “misguidedness”.

This “self-righteousness” is incorrect and incompatible with modern society – and, as hypocritical as humans can be in social organization, we fail to recognize this as sociological fact.

See officer, they are wearing straightjackets and everything.

[gotta love “Superdickery” – https://superdickery.tumblr.com/post/24832140635/see-officer-they-are-wearing-straightjackets-and ]

While I do not advocate punishing those who are “inaccurate” in the information they dictate – they do, after all, have a right to speak their minds – we do not do enough to rebut such informational inaccuracies.  In fact, there are those who will even go out of their way, sometimes aggressively, to punish those who exercise their right to *not* speak!

Again, I believe this is an apathetic and punitive response to those who are contrary to one’s viewpoint.

This does not mean that one must be intelligent or highly educated before engaging in debate.  But those who do debate should be tolerant of and open to others’ viewpoints before dismissal or punishment.  And those who listen to a debate should do more than superficially take in the banter; they should consider why the debate arose in the first place, to ensure the question was raised without ulterior motive.

And this, I believe, is the major difference between the Internet of “then” and the Internet of “today”: the lack of challenge to sources of data, of critical thinking about the data, and of overall media literacy.

This was all present, as a collective whole, on the early Internet, as the network *then* was mostly utilized and consumed by educated individuals who had at least the base level of critical thinking skills taught and required in academia and research.  After all, it was built as a research network for the Department of Defense (ARPANET) and the National Science Foundation (NSFNet).

Our common practice of using the scientific method to challenge assertions and facts – with data and research that stand up to peer review – was crucial in building the collective consensus to build and collaborate on the Internet.

It’s still crucial now – more than ever.

But the “social” Internet does not require such rigor or review.  Its focus, pure and simple, is for people to “publish” content that attracts viewers, for the sole purpose of selling advertising against minimal engagement.

While “social” is more open and cheaper to the consumer than the privileged and costly halls of academia, it also trades interactivity for passive consumption and minimal feedback on our brief attention and engagements.

This, I assert, breeds negative behaviors we objectively would not want: apathy toward life and one’s role within it, indifference to others, and a mob mentality adopted just to regain a sense of “belonging” amid the lack of interaction.

Disinformation and abuse of a communication commons


The Internet of yesteryear definitely was a more “interactive” space.  Today’s Internet lacks this interactivity due to subdivisions in our discourse and interests.

While the Internet of yesteryear certainly had “passive” content, it also had way more interaction due to limited options for networking with others. 

Today, we join multiple social networks that harvest our interests and intents for marketing purposes, yet have reduced interaction to a brief comment or an even briefer “emoji” icon.  Minimalists may rejoice in the brevity, but many seek greater engagement with one another, driving us all to become “content creators” of one sort or another.

We are also now exposed to more of the human psyche, in all of its diversity, than ever before.

This is both enlightening and horrifying at the same time. 

We have the ability to learn new concepts and skills that align with our interests at any given moment, making us more intelligent in our efforts.  The counterpoint is that we are now also exposed to more human depravity and violence, including being silent witnesses to those acts in real time, without time to consider the event unfolding before our eyes.

 
We have become the audience of, and silent participants in, our real human natures: in our discourse with one another, in the division and divisiveness we experience as a society, and in the subjugation of one class of humans by another, witnessed without the ability or capability of recourse.
 
What shall we do to solve this problem?
 

[ChatGPT responds, with broader edits from the editor:]

Now, more than ever, the moment underscores the need for (digital) literacy and critical thinking skills to navigate the online realm effectively.

Addressing societal divisions and empowering individuals to actively participate in online discourse are crucial steps towards creating a more constructive society whether online or in real life.

Achieving this balance requires promoting mindful technology use and fostering empathy, ultimately facilitating a more meaningful global society without denigrating anyone’s beliefs or values.

 
Do we live in a safer place? Yes. But no…
https://greyhatsage.com/do-we-live-in-a-safer-place-yes-but-no/ – Sat, 17 Feb 2024

Preface:

If you have read my original site at https://00100100.net then you know that 30 years ago, my life took a turn down the crime and punishment aspect of computer security.

Ironically, I am typing this up on the exact 30th anniversary of that occasion.  Whether it publishes on the same date, I do not know.  But I am writing this first post on that auspicious day, and here you are reading it.  So cheers!

Then and now – there are people who do know the dangers of the world. And there are those who do not.

From my observations over the past 30 years, most people in the world fall, by a great margin, into the latter.

No matter what you say or do, people do not want to think about it.  Or, more importantly, they do not care to be troubled with it.  They feel that is the role of law enforcement – why else would they pay their taxes?

However, when you look at those who work in that realm of employment, that is not what you will get.  Only a small percentage of those on any “force” (whether local, national, or specialized) have any training in investigations – let alone investigations into “cyber” crimes.

The computer security industry is filled with ex-law enforcement/ex-military types who join companies in the belief that they know what is best to protect people.  But, do they?

Then: What is a “Hacker” – 30 years ago and earlier.

In the documentary “Revolution OS”, one of the people who had great influence on how operating systems were built and designed is Richard Stallman (rms).  In the documentary, he describes “passwords” as a means of control over users and over the power they had to use computing resources.  If you have never seen it, you will likely come away with the impression that Stallman holds a much more “anarchistic” worldview than, say, Linus Torvalds, who wrote and developed the kernel of the operating system we now know as Linux (or GNU/Linux).

Considering that until the late 1990s, most people saw computers as a “fad”, computer security was greatly lacking prior to the commercial inception of the Internet.

As a person who was present in the online world both before and after the inception of this commercial Internet, I experienced life then – and now – so I have a baseline for comparing the two.  “Hackers,” or kids like myself who for the most part had no malicious intent – who actually did have a “code of ethics” and a mindset that differs from the “hackers” sensationalized by today’s news media as “criminals” – are indeed a different breed of personage.

I personally stand on the side that holds “Hacker” is *not criminal*.  I see it as the same argument as what defines a group of people as criminal for being “skateboarders” or in a “gang”.

Often, enforcement in the legal realm is reduced to interpretation of the statutes, and is rarely as clear as a binary (yes/no) reading of the intention of the legislation passed by lawmakers.  Many times, the laws legislated and passed fit poorly with either the intent of the legislation or the retribution it asks for, if not both.  But I will table this perception for another time, with one exception:

There is a difference in intent regarding those who push the limits of legal or technical definitions, and those who are intentional in their motives to defraud or steal for the profit motive.

Case in point:  The current copyright law makes it illegal to “share” copyrighted music or movies in the United States.  But it is NOT ILLEGAL to receive or DOWNLOAD such material.  Therefore, it is counter to what we are taught from kindergarten to the public commons that “sharing” is “good”.  Food for thought.

(and a tip of the hat to “RMS” for pointing that out in “Revolution OS” when he describes the Free Software Movement (FSM/FSF)).

Now: What is a “Hacker” – present day

A “Hacker” is a person who could be considered a “subject matter expert,” or who holds an “advanced” level of knowledge in a particular field of study.  More to the point, a “Hacker” is often a person who is passionate about an aspect of (but not limited to) technology, engineering, art, science, or mathematics, *but* likely without the formal education and training involved in creating or inventing the aspect they are working within.  Oftentimes it’s a self-imposed title, but credibility does go up as the world or your peers note you as an “expert” in that subject matter.

What differentiates hackers from criminals who use computers?  The intent to cause harm to others, for profit.

Now, I know some will take issue with that viewpoint, stating that bypassing DRM to store media in a manner not explicit in the publisher’s intent impacts the companies and shareholders who are the “others” depending on that revenue.

I would then say that whole generations of people copied and made “mix” tapes – whether the source was legitimately owned media or radio recordings made over public airwaves – and, even more to the point today, sampled small pieces of sound or video to assist in reimagining them into a new piece of work.

There is a whole framework for royalties in place.

While still mostly less fair to the artist than to the publisher, the fact is that there is no direct accounting between a license purchased by a consumer and a publisher that carries across forms of media and technologies.

How many copies of the Beatles’ “White Album” were purchased on vinyl, then again on cassette, then on CD?  Same recording, different technology – yet I likely already purchased the license at the beginning, and was simply made to repurchase the media with each format change, or whenever the media was destroyed or stolen.

Would one say it’s fair to pay full price for the album again?  Or just the reimbursement costs of the media?

This same argument applies to computer software.  What is the definition of software piracy?  From what I have read, it mostly exists to prevent profiting from the original company’s work.  But where is the profiting by outside parties actually happening?  Is it in the person-to-person distribution of media gratis?  Or in the distribution of media sold by one company to another for profit?

Yet the SPA spent money and effort promoting the idea that “sharing is bad” with their “Don’t Copy That Floppy” campaign.  Today, thanks to people who decided that *all* the material is worth saving from destruction, we have replacement copies of much of the digital media that was created and sold to consumers – work and artistry preserved by amateur “archivists” rather than lost to the sands of time because it missed a sales quota and was dumped into landfills.

Let’s not forget: while the industry may own the materials, their master copies can be destroyed just as easily as physical artifacts of native Hawaiian heritage have been destroyed for the rest of time.  A distributed archive prevents that from becoming the end of the story.

So – now we know a “Hacker” is different from a “criminal”.  How am I safer and how could I be safer?

The world 30 years ago, I describe to people as being very much like the world in the 1983 movie “WarGames” with Matthew Broderick and Ally Sheedy.

If you’ve ever forgotten your password, you know you can stumble around trying to remember it and possibly get locked out.  Or maybe you remember it fairly quickly after a couple of tries and are let into your email or social media.  That’s pretty much what it was like – then – even for a person with no knowledge of the company or organization.  Default accounts were often still present, and only cursorily guarded if at all.  People’s information was out there, available to anyone who knew how.

Today:  A bit more difficult but sadly, still possible.

When I began my professional career, I interviewed at many places.  One of them was at a well known national banking chain who was beginning to leverage technology in a bigger way.  I met with the head of their “computer security” and during the interview it was revealed to me that “while I had skills, the industry itself did not value them” at that time.

More to the point: Computer security was then (and  often now) seen as a function of “Insurance” and not of loss prevention or brand reputation.

This all changed in the mid-2000s, when data breaches and massive data leaks (which still happen today) became more frequent and more public.  A whole industry sprang up from the outcry: consumers demanding “social responsibility,” and shareholders not wishing to lose profits to “loss mitigation,” whether from identity theft or from investment in companies that did not protect their customers’ private data.

Another case in point: robocallers to this day may use another party’s telephone number as the ‘origin’ of a phone call, making you believe it is one thing when it turns out to be another.  The flaw in the phone system that allows this has been present since “Caller ID” was introduced to consumers back in the 1980s.  The flaw was known the entire time, but there was no motive to fix it until robocalling became a nuisance in the mid-to-late 2010s.  The problem was greatly reduced by carriers’ implementation of STIR/SHAKEN validation just a few years ago, but it ultimately still exists and will continue to exist in corner cases for the foreseeable future.

It is cheaper for companies to generate the “perception” of fixing a data breach by offering identity monitoring as a token gesture, along with forward-looking press releases about how they intend to prevent the issue in the future.  But once the damage is done – it is done.

Today we build incredibly complex technologies, and oftentimes these systems, built by humans, inherit human mistakes.

That is the world we live within.

We cannot take for granted that when we give our information or our business to a company, they are *always* doing the “right” thing with our data.  We see it in how we are allowed to use these “social” platforms or services for “free,” then find ourselves bombarded by “targeted advertising” we never asked for in the first place.

What we do is now the profit motive for companies – not what we want to do – as the goals of the parties involved are often not in alignment.

Artificial Intelligence (AI) cannot solve this issue, as those who build the technology are oftentimes banking on this same information.  Google built its entire company, in the beginning, on the profitability of what we search for.  Today it is one of the biggest companies in the world, and it is not alone.

There will always be a battle between “right for the user” and “right for the company/shareholder” when a profit motive is involved.  And the solution is not a simple one without legislation such as the GDPR, or the other privacy laws being enacted in some states.  Much is still to be done.

Today’s “criminals” most often target those outside their own personal societies, and almost always for a profit motive when it is not for social or political reasons.  Our country has a history of rebellion we both condemn and encourage in the same breath – one of our social hypocrisies.

Notwithstanding the latter, which you may see as “defacement” of websites, we *are* safer: we have stronger encryption, which would *not* have happened without a public computer network such as the Internet; more accountability, as consumers, from the companies that do business with us or hold our information; and allowances for “Hackers” who test technology’s limitations up to the edge of the legal “criminal/civil” definitions, to ensure our world is in fact a safer place.

This knowledge is often not kept secret, but shared, so that others may learn and discover and ensure their own works are safe for consumers.  Ultimately, I believe we do want to do good works for the betterment of society, while remaining cognizant that there will be those who wish only the betterment of themselves.

This is the difference between a “Hacker” and a “criminal” in my opinion and how we now live in a safer place.

This includes such acts as those of Chelsea Manning and Edward Snowden.  Transparency within our society provides clarity and visibility into those who represent us as a people and into what we hold association with.  Without transparency, we will never know whether those whom we put in positions to protect or govern us are in fact representing our best interests as a people, or as society in general.

 