A huge array of Personal Weather Stations is what makes Weather Underground (aka WUnderground) so much more precise than any other weather service. In fact, the company’s forecasts are so precise (and cheap?) that “even” Google chose them as its primary source for searches and other weather-related cloud services (see http://www.google.com/intl/en/help/features_list.html#weather). But while there are “more than 25,000 weather stations across the globe [that] already send data to Weather Underground“, there are still many areas of the world that are not properly covered by this extensive network. In my case, the topography of the region is so extreme that the nearest station was constantly off by up to 15 degrees. Needless to say, even the next-day forecasts were all over the place.
Enter the (cheap, consumer-grade) La Crosse Technology Wireless Weather Station with La Crosse Alerts (aka C84612), sold at brick-and-mortar Costco stores. The station costs roughly $80 and comes with wireless wind (speed, gust, direction), rain (still not working for me), temperature, humidity, and pressure sensors, as well as a mediocre dashboard panel and an “internet gateway” (which connects the panel to La Crosse’s proprietary website). The station reports current conditions to the dashboard in close to real time, and roughly every 20 minutes to the attractive La Crosse Alerts secured website.
The code below can be reused to connect other types of weather stations, but it was originally designed to utilize undocumented features of the La Crosse Technology Wireless Weather Station with La Crosse Alerts (aka C84612).
Full tutorial is now available as a separate article “(HACKING) PROFESSIONAL WEATHER STATION FOR UNDER $100“.
Temp path moved to a variable instead (thank you, Mara and Ford)
Down-alert default changed to 2 hours (the La Crosse internet updates quite often seem to be down for just over an hour)
Check for the station itself being down (outside temperature reported as exactly 32 and dew point of 0)
Running the script with ?output at the end produces output on every run, which helps troubleshoot otherwise successful runs
Info on the delay (in hours) since the last update is always included with errors
Insert..Select is a great command, but sometimes you just need more: sometimes you need to get bits of the processed information back.
The first thing you will attempt, of course, is to declare the few variables you need. And yes, you will see that you can indeed assign them, along with updating columns, all in one call. Neat!
[syntax_prettify linenums="linenums"]
UPDATE OriginalTable
SET @ID = ID,
    @OldValue = OriginalTable.Value
FROM OriginalTable
WHERE OriginalTable.AddDate < GETDATE() - 30
[/syntax_prettify]
Unfortunately, not only will other types of statements (e.g. DELETE and INSERT) fail to do that, but your variable(s) will hold only one value. Yes, you could concatenate... but are you really going to reach for that crazy 20th-century workaround again? Our good old OUTPUT keyword comes to the rescue: it redirects the processed rows back to wherever you tell it, including a table variable that you might have created just for that purpose.
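As a rough sketch (reusing the same hypothetical OriginalTable with ID, Value, and AddDate columns from the snippet above, and a placeholder column update), capturing every affected row with OUTPUT could look something like this:

[syntax_prettify linenums="linenums"]
-- Table variable to receive one row per updated record
DECLARE @Changed TABLE (ID INT, OldValue INT);

UPDATE OriginalTable
SET Value = 0  -- placeholder change; use whatever update you actually need
OUTPUT deleted.ID, deleted.Value  -- "deleted" holds the pre-update values
INTO @Changed (ID, OldValue)
WHERE AddDate < GETDATE() - 30;

-- All of the old values are now available, not just the last one
SELECT ID, OldValue FROM @Changed;
[/syntax_prettify]

The deleted pseudo-table exposes the before image of each row and inserted the after image, and the same OUTPUT clause works on DELETE and INSERT statements as well.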
We’ve already talked about batch processing in SQL - Cursor and Batching, but it’s not just human labor that batching helps with. One of the best ways to optimize any kind of code is to limit the number of hits it makes against storage (physical or otherwise). That can come at the expense of more CPU cycles, so proper compromises need to be made to ensure a true performance improvement.
In the case of SQL, this usually means limiting the number of separate calls to the databases/tables, and ultimately limiting the number of requests themselves (select/insert/update/delete) by wrapping them into as few statements as possible... Oh, how many times have I seen very simple procedures with cursors, or just a check request (e.g. IF EXISTS) followed by a possible change request, followed yet again by a closing check or pull request. Why would you do that when you can easily do all of it at once and let the engine optimize properly out of the box? The harsh cultural weight of our 20th-century coding backgrounds, perhaps?
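To make the idea concrete (the Accounts table, its columns, and the variables here are all made up for illustration), compare the chatty three-round-trip pattern with a single batched statement:

[syntax_prettify linenums="linenums"]
DECLARE @ID INT = 42, @Amount MONEY = 10;

-- The chatty pattern: check, then change, then read back (three hits)
IF EXISTS (SELECT 1 FROM Accounts WHERE ID = @ID)
    UPDATE Accounts SET Balance = Balance + @Amount WHERE ID = @ID;
SELECT Balance FROM Accounts WHERE ID = @ID;

-- The batched version: one statement, one pass over the table
UPDATE Accounts
SET Balance = Balance + @Amount
OUTPUT inserted.Balance  -- "inserted" holds the post-update values
WHERE ID = @ID;
[/syntax_prettify]

The single UPDATE does the existence check (via its WHERE clause), the change, and the read-back in one shot, which is exactly the kind of work the engine can optimize on its own.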
Somewhat surprisingly, batch processing remains one of the most gaping holes in DB power users’ knowledge bases... Processing records one by one (searching, fetching, updating) they seem to handle just fine no matter the type of DB, but there is something about batching that many do not seem to grasp. What’s worse, they don’t even KNOW what they are missing out on...