... when I used something similar to REST-API-style data delivery, GitOps-style webpage release delivery, ChatOps-style server control, or scraping data from a public web page to collect data for a bail bonding company. Well, let me tell you four little stories over the next weeks, and you can decide at the end.
So in the late 1990s I was employed at a large German consulting company as a technical consultant and, as you might suspect, was working on a Year 2000 project at a large insurance company in Germany. The insurance company owned a lot of smaller insurance companies. All of them had their own independent and sometimes very small IT departments. So the board decided to centralize IT and consolidate all the IT resources and departments into one central IT department.
Our company was hired to do that and also to make sure that all their little software solutions were ready for the year 2000 and could be monitored and managed from that central IT department. My part as technical consultant, together with two other colleagues (a Unix admin and an Oracle specialist), was to build a centralized monitoring system, so the central IT department had an overview of which versions of the OS, third-party components, and specialized insurance software were installed, plus other interesting key values, like disk space and so on.
At first we came up with a centralized system that had to log in to the roughly 200 servers (Windows, Unix, and Linux) to gather the needed data, put it into an Oracle database, and then generate a bunch of HTML pages for an Apache HTTP server.
But soon we found out that it was simply too hard and unreliable to do it that way. So we came up with a different solution: we implemented three universal (for Windows, Unix, and Linux) FTP upload and data gathering scripts, and for each server and application type we customized those scripts. Then we made sure that these "agent" scripts were deployed on each of the roughly 200 servers and ran on a schedule.
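In today's terms, such an agent could be sketched in a few lines of Python (which we did not use back then; the real agents were platform-specific scripts). The key names and the FTP target below are made up for illustration, and the actual upload is left commented out so the sketch runs offline:

```python
import platform
import shutil
from io import BytesIO
# from ftplib import FTP  # real agents would upload; left out so the sketch runs offline

def gather_info():
    """Collect a few key values about this machine (hypothetical key names)."""
    total, used, free = shutil.disk_usage("/")
    return {
        "Server": platform.node(),
        "OS": platform.system(),
        "OSVersion": platform.release(),
        "DiskFreeMB": str(free // (1024 * 1024)),
    }

def to_ini(section, values):
    """Render the gathered values as an INI-style text block."""
    lines = [f"[{section}]"]
    lines += [f"{key}={value}" for key, value in values.items()]
    return "\n".join(lines) + "\n"

def upload(payload, host="ftp.example.invalid"):
    """Upload the payload to the central FTP server (hypothetical target)."""
    # ftp = FTP(host)
    # ftp.login("agent", "secret")
    # ftp.storbinary("STOR info.ini", BytesIO(payload.encode()))
    # ftp.quit()
    pass

if __name__ == "__main__":
    print(to_ini("SERVER", gather_info()))
```

Run on a schedule (cron on Unix, the AT scheduler on Windows NT back then), each server ends up dropping one small data file into the central folder structure.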
After some hiccups, the agents delivered their information reliably to an FTP server in a folder structure. Then I wrote a script which generated HTML files in a folder structure out of the data files, so you could access the information through an Apache HTTP server, e.g. server/company/group/server/info.html.
The HTML files were actually not HTML files at all: they used the INI format, so data was stored like this:
```ini
[APPLICATIONS]
APP1=Insurance Claimer
APP2=...

[APP1]
Name=Insurance Claimer
Version=184.108.40.206
Installed=1998-12-01
Server=yyyyy
...
```
So you had an INI file displayed in the browser. Later we built a better web page which consumed the data files and built something like you have in Icinga and co.
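The generator script essentially walked the uploaded folder tree and wrote one page per data file. A minimal sketch of that idea in Python (again, the original was not Python; the file naming and helper names here are invented):

```python
import configparser
from pathlib import Path

def render_page(ini_text, title):
    """Turn an agent's INI data file into a simple static HTML page."""
    parser = configparser.ConfigParser()
    parser.optionxform = str  # keep key case instead of lowercasing
    parser.read_string(ini_text)
    parts = [f"<html><head><title>{title}</title></head><body>"]
    for section in parser.sections():
        parts.append(f"<h2>{section}</h2><ul>")
        for key, value in parser[section].items():
            parts.append(f"<li>{key} = {value}</li>")
        parts.append("</ul>")
    parts.append("</body></html>")
    return "\n".join(parts)

def generate_site(data_dir, out_dir):
    """Mirror the company/group/server folder tree as browsable HTML files."""
    for ini_file in Path(data_dir).rglob("*.ini"):
        rel = ini_file.relative_to(data_dir).with_suffix(".html")
        target = Path(out_dir) / rel
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(render_page(ini_file.read_text(), ini_file.stem))
```

Serving the output directory via Apache then gives exactly the server/company/group/server/info.html layout described above.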
Today you would use a modern monitoring solution, maybe REST API agents and JSON, but back then most of the technologies we use today were new or not available:
- Nagios was released in 1999 (according to the German Wikipedia): Nagios(DE), Nagios(EN)
- The REST architectural style was introduced around 2000: REST api
- Apache was released in 1995: Apache HTTP Server
- The JSON format was introduced around 2001: JSON
So for 1998/1999 I still think it was a nice and pretty sophisticated solution. Sure, back then there were surely similar solutions, maybe even commercially available ones. But we either weren't aware of them or couldn't use them, and of course open source wasn't such a big thing back then. But were my colleagues and I pioneers?
Well, let me tell you three more stories over the next three weeks, and then you can decide if I was a pioneer or just applied common and practical IT engineering skills to a simple and common IT problem, hence nothing special to be proud of.
As always, apply this rule: "Questions, feel free to ask. If you have ideas or find errors, mistakes, problems or other things which bother or enjoy you, use your common sense and be a self-reliant human being."
Have a good one. Alex