Environment Harper Canada changed their page layout around mid-October; the feed has been down since that time.
I've finally finished rewriting the data-capture tools. There are two of them: wxhtml, which pulls observations, statistics and the forecast
out of the page; and wxrec, which harvests data (normals and records) from the records page for a given location. I'm pleased to report that
they largely work as advertised, again. One little bug to iron out with nightly forecasts--should get it tonight.
Regional and National feeds have been restored.
Hopefully, it will be some time before another page redesign.
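For the curious, the guts of wxhtml amount to plain screen-scraping. The snippet below is not the actual tool, just a minimal Python sketch of the idea; the URL and the page layout it assumes are placeholders, and, as this episode proves, the real layout changes without warning.

    # Not the actual wxhtml -- just a rough sketch of the idea, in Python.
    # The URL and the dt/dd layout are assumptions; the real page is structured
    # differently and, as noted above, gets redesigned from time to time.
    import requests
    from bs4 import BeautifulSoup

    CITY_PAGE = "https://weather.gc.ca/city/pages/on-118_metric_e.html"  # example only

    def capture_observations(url=CITY_PAGE):
        """Return a dict of label -> value scraped from the conditions block."""
        html = requests.get(url, timeout=30).text
        soup = BeautifulSoup(html, "html.parser")
        observations = {}
        # Assumed layout: <dt>label</dt> followed by <dd>value</dd> pairs.
        for dt in soup.find_all("dt"):
            dd = dt.find_next_sibling("dd")
            if dd:
                observations[dt.get_text(strip=True)] = dd.get_text(strip=True)
        return observations

    if __name__ == "__main__":
        for label, value in capture_observations().items():
            print(label + ": " + value)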
Incidentally, as you can see from the Capture Accuracy indicator, I'm still using the OCR system to capture local data. It's working quite well, and provisioning a Windows machine 24/7 just for data capture is too much. I'm not going to use the automatic data-collector dongle.
Also incidentally, I'm working on adding a couple of new features to MCH. One will calculate normals for a given range of dates and also
report back on whether they meet the WMO criteria for normals. The other will recalculate records. Expect these new features in the next couple
of weeks.
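To give a flavour of the normals feature, here's a minimal sketch of the kind of completeness check I have in mind, assuming the commonly cited WMO "3/5 rule" (don't compute a monthly value if more than 3 consecutive or more than 5 total daily values are missing). None of this is MCH code; the function and data layout are just for illustration.

    # Sketch only, not MCH. Daily readings come in as a list, None for missing.
    def monthly_normal(daily_values):
        """Return (mean, meets_wmo_criteria) for one month of daily readings."""
        missing_total = sum(1 for v in daily_values if v is None)

        # Longest run of consecutive missing days.
        longest_run = run = 0
        for v in daily_values:
            run = run + 1 if v is None else 0
            longest_run = max(longest_run, run)

        meets_criteria = missing_total <= 5 and longest_run <= 3
        present = [v for v in daily_values if v is not None]
        mean = sum(present) / len(present) if present else None
        return mean, meets_criteria

    # Example: four scattered missing days still qualify; a week-long gap wouldn't.
    july = [21.0, 22.5, None, 23.1, 24.0, None, 22.2, None, 21.8, None] + [22.0] * 21
    print(monthly_normal(july))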
Lord dying, there's days when I just want to pull my hair out.
You may have noticed the other day that readings got really screwy for some hours. It all started with adding a new feature to the OCR program the system uses for local data capture: I decided I wanted to add math parameters, i.e. the ability to adjust values before output.

Now, y'know me by now; I like to KISS wherever and whenever possible. I added parameter support, and it just worked. Happy, I let the new release loose. Within 15 minutes or so, it was applying the adjustment I had specified for the barometric pressure to everything!
Out came the development tools, and a whole bunch of hair-pulling later, I had a version of the software that did what it was intended to do.
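For anyone curious what the bug actually was: the adjustment was effectively stored once, globally, instead of per field. Here's a hedged Python sketch of the corrected shape of things; the names and values are illustrative, not the OCR program's actual code.

    # Illustration only. The fix: adjustments are keyed by field and applied
    # only to that field's value; everything else passes through untouched.
    ADJUSTMENTS = {
        "pressure": lambda kpa: kpa + 0.35,   # e.g. a hypothetical station correction
    }

    def adjust(field, raw_value):
        fix = ADJUSTMENTS.get(field)          # the buggy build used one global adjustment
        return fix(raw_value) if fix else raw_value

    print(adjust("pressure", 101.1))     # adjusted
    print(adjust("temperature", 21.4))   # left alone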
New equipment arrived last night: a Lacrosse 2811U. Setup is complete.
Wind and precip readings (such as they are) are restored. And, should I choose to use it, it comes with the ability to capture data to a PC. As it stands, the OCR software is capturing nigh-perfectly, so I might not bother.
I really should have set up my test environment long ago. It just makes things so much easier.
Bugfix: changed how warnings were timing out. The upshot is that YOW weather warnings will again be replicated on the CTO weather screen, and
all warnings should time out after six hours. That figure may be adjusted.
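If it helps to picture the fix, here's a small sketch of the timeout logic. The six-hour figure is real, but the record layout (a dict with an 'issued' timestamp) is my own stand-in for whatever the server actually stores.

    from datetime import datetime, timedelta

    WARNING_LIFETIME = timedelta(hours=6)   # the adjustable figure

    def active_warnings(warnings, now=None):
        """Keep only warnings issued within the last six hours."""
        now = now or datetime.now()
        return [w for w in warnings if now - w["issued"] <= WARNING_LIFETIME]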
I realized recently that the code needs a bit more work before I release it. To this end, I'm creating a test environment; something I should have done, in fact, long ago.
I've provisioned a virtual machine and installed my favourite brand of Linux, Debian. I'm in the process of setting it up; then I'll have a
proper test environment for my software.
Of course, I've fashioned a shell script to divert all the incoming data to the virtual machine (as well as the server). There are numerous points where files are moved around; a single DIVERT command sets this up automatically.
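The actual script is shell, but the idea is simple enough to sketch in a few lines of Python. The paths and the DIVERT switch below are placeholders; the point is that every file move goes through one helper, and a single flag mirrors the data to the VM.

    import os
    import shutil

    DIVERT = os.environ.get("DIVERT") == "1"    # hypothetical switch
    VM_INBOX = "/mnt/testvm/incoming"           # hypothetical mount point for the VM

    def deliver(src, dest):
        """Move a data file to the server; optionally mirror it to the test VM."""
        if DIVERT:
            shutil.copy2(src, os.path.join(VM_INBOX, os.path.basename(src)))
        shutil.move(src, dest)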
I'm liking this very much. I can now play with my software in a stable test environment until I get it right, and the 'official' server just
runs and does its job; no fuss, no muss, as they say.
About two weeks ago, I had Bell Fibe installed: TV, Internet, home phone. Good deal; it beats the hell out of the Rogers service we've been stuck with for years.
There have been several side effects. First off, the standard install comes with a wireless router, so I gradually got all my network switched over to that. That went relatively painlessly, though there were a few hiccoughs in changing over from a 192.168.* addressing scheme to 10.0.*.*--mostly with the wireless TV receivers. But I got there in the end.
The second was a little tougher to figure out. As you know, I keep a dynamic hostname to allow Internet access to my site; it's updated automatically by a client, and it had always just worked.
For a few days, it stopped working. This was partially my fault; I had had my old router doing the updating, and it wasn't getting the right address. I switched the updating over to the new router, and it started working. In the meantime, the old router insisted on behaving like a router and not a simple network hub, so off I went to buy a network switch.
I got the switch installed; bam--up comes the Internet. Problem: I couldn't access the dynamic hostname. Later in the day, from work, it was working. When I got home, it wasn't working again. Curiously, this went on for the next two days, until I figured it out: from inside my own network, I could no longer get through via the dynamic hostname. I pointed all my browsers at the source machine--gypsy--instead, and all was well.
The last change I made was to my custom homepage (http://mizar64.dyndns-home.com:64180/home.htm
if you're interested), to allow access to the Raspberry Pi from home or work. It's kludgy, but it works.
Incidentally, I'm working again and temporarily taking a break from developing (I'm a developer professionally, so I've got my eyes full of Java
code during the daytime). I'll get back to it in a couple more weeks.
The past few days, readings have been sparse. The OCR program was crashing out, and I didn't have time to see why.
I finally got a look a few moments ago and immediately saw why. One of the elements of the OCR program is the ability to define 'validation mappings' for each character; in other words, "this combination of segments lit up" equals character X. I had originally allowed for 12 validation mappings per character, but recently I realized that the OCR program could actually do error correction: if a certain pixel is added to or removed from certain characters, a distinctive pattern appears. So why not harness that and make the OCR program much more fault-tolerant? Well, it turned out that in adding the additional mappings, I overshot the limit of 12. Easy fix; five minutes.
The lesson to be learned here? Always allocate amply for your data needs.
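For the curious, here's a toy version of the validation-mapping idea in Python. The segment names, patterns, and limit are illustrative only; the real program's tables look nothing like this.

    # Toy illustration, not the actual OCR code. Each character is recognised by
    # the set of display segments that lit up; extra mappings with one segment
    # added or removed make the match tolerant of a single mis-read pixel.
    MAX_MAPPINGS = 16   # the original limit was 12; overrunning it was the bug

    MAPPINGS = {
        "1": [frozenset("bc"), frozenset("bcg")],   # stray 'g' still reads as 1
        "7": [frozenset("abc"), frozenset("ab")],   # dropped 'c' still reads as 7
        "8": [frozenset("abcdefg")],
    }

    def recognise(lit_segments):
        """Return the character whose mapping matches the lit segments, or '?'."""
        lit = frozenset(lit_segments)
        for char, patterns in MAPPINGS.items():
            if lit in patterns:
                return char
        return "?"

    print(recognise("bcg"))   # -> '1', courtesy of the error-correction mapping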
'Working on the site aesthetics' meant 'giving all the pages a makeover and enforcing more CSS compliance.' 'Tis nearly done now; just a few pages to polish off.
I've been working on documenting the system, prior to open-sourcing it. I'm up to about 60 pages so far and have barely begun documenting all
the little programs that help make up the system.
I've decided to open-source the system; it's matured to that point, I think. So time now to turn it loose and see
if the public chews.
Not much development work going on lately; with the addition of the ability to detect whether or not a tag has data, it does everything I want of it (there's a rough sketch of that check at the end of this post). I'm only working on the site aesthetics, now.
To that end, I've been working on documenting the entire system, including something like seven support programs. I'm up to 41 pages now and still have probably 20 to go. Then I've got to invent and document the install process (the original system evolved in place), and I've provisioned a machine for that.
I'd estimate that the code will go up on SourceForge within the next couple of weeks, if I don't find work; or a couple more if I do.
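As promised above, a rough sketch of the tag check. The {{tag}} syntax, the field names, and the line-skipping behaviour are my own stand-ins for illustration; the system's real templates work differently in the details.

    import re

    TAG = re.compile(r"\{\{(\w+)\}\}")   # hypothetical {{tag}} placeholder syntax

    def render(template, data):
        """Substitute tags that have data; drop lines whose tags have none."""
        out = []
        for line in template.splitlines():
            tags = TAG.findall(line)
            # The has-data check: every tag on the line must have a real value.
            if tags and not all(data.get(t) not in (None, "") for t in tags):
                continue
            out.append(TAG.sub(lambda m: str(data[m.group(1)]), line))
        return "\n".join(out)

    print(render("Temp: {{temp}}\nWind: {{wind}}", {"temp": 21.4, "wind": None}))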
Our outage the other day turned out to be more serious than I'd thought. In the act of replacing the batteries in the receiver, I managed to break the thing; it just resets now, over and over.

We're now running on backup sensors, which means no precip or wind data. Until I'm working again and can afford a replacement, this is indefinite.