Another trip through the mirror

The problem

In our new application, we are integrating Logi Info, a reporting UI tool, to handle a page of interactive reports. The software is essentially aimed at doing the same thing we are, but it is easier to deal with and offers more advanced options out of the box, since it's a specialized tool rather than a coding framework. This process relies on embedding their reports into our own Dashboard.

Logi proved to be a persistent thorn in our sides for some time. The tool was adopted throughout the company, and while other teams were getting it to work, we were hitting brick walls over and over, largely due to the complexity of our environment. The first issue was getting it to embed at all, which required a bit of finagling and a secondary, more complex method of embedding when the simpler static HTML embedding didn't quite work out. After that we had to deal with getting the server to talk to our API. Our API uses a security token, and we had to transfer that token from the user's local machine running JavaScript back to the Logi server sitting on our end so it could authenticate with the web API sitting on the same server. (Yes, we had to have the server send a token to the user so it could be sent back to us so the server could talk to itself. Programming is weird.) Logi has parameter passing and can be configured for HTTPS to allow for secure transport of the token, so it seemed like an easy task. Then came our protracted battle with the security token.
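
To make the round trip concrete, here's a rough sketch of the idea in browser TypeScript. Every name in it is a hypothetical stand-in (the token endpoint, the report URL, and the rdToken parameter are all made up), and the plain iframe is a simplification, not the more complex embedding method we actually ended up using:

// The browser fetches the token from our API, then hands it straight
// back to the Logi server as a parameter on the report request (HTTPS).
async function embedReport(container: HTMLElement): Promise<void> {
    // 1. Our server issues the security token to the user's browser.
    const token = await fetch("/api/auth/token").then(r => r.text());

    // 2. The browser sends the token back alongside the report request.
    const frame = document.createElement("iframe");
    frame.src = "https://reports.example.com/rdPage.aspx?rdReport=Dashboard"
        + "&rdToken=" + encodeURIComponent(token);
    container.appendChild(frame);

    // 3. The Logi server can now use the token it received to
    //    authenticate with the web API sitting beside it.
}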

Left my golden token in my other jacket

In testing, we could pass a token to Logi, have it render a debug report to tell us what it received, and see the token being passed. Yet we were getting seemingly random 403 errors for an incorrect token. In addition, Logi seemed to render reports reliably, but when we tried to summon up debug information in a report, we kept getting the equivalent of 404 errors for missing debug files. We had a contractor working with us who had previously worked for Logi Analytics (who own Logi Info), but his focus was on the design aspect and he did not have a solution for us, even after talking to his contact in the company. As such, we set about trying to figure out the problem over the next couple of months.

This process was frustrated by several solutions that appeared to work temporarily but later showed the same seemingly random behavior of missing files or inconsistently dropped tokens. We figured that the token might be getting dropped because it was being stored in a temporary or semi-temporary variable on the server, so we had it pass the token to another variable in the user session. That worked for a day, then the problems started again. We noticed that the token was being passed correctly every time, but sometimes it did not show up in debug when the report was actually rendered, so another developer wrote a plugin that set the token value when the user session was created, right after it got the token and before the report began to render. This worked for a few days, then the problems appeared again. We thought it might be an issue with user sessions, since certain update information we were passing to the server was not actually getting updated properly, so we put effort into invalidating sessions immediately and going to a session-less mode. This is only quasi-possible in Logi anyway, and once again it did not solve things.

From hell’s heart I stab at thee

Around this time, the task of working on Logi came back around to me as my primary task when another developer had other pressing work. After having a stare-down with it for some time while I tried to wrap my head around what could be going on, I noticed an issue that seemed to explain our missing token. Sometimes when the browser made a report request it would get the report directly, and sometimes it would get a 302 redirect and then get the report. This seemed to correspond to when Logi would hiccup and lose information. Looking through the packets to see if this was significant, I noted that the 302 redirects included only some of the parameters we were passing in the report request. After a quick Wikipedia search, I found that 302 redirects, by their very nature, drop all POST request data: the browser re-issues the redirected request as a GET with no body. The partial information being written to the new URL must have been Logi translating some of the POST parameters into the redirect URL.
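
You can actually watch a 302 eat a POST body with a few lines of Node (18 or newer, for the built-in fetch). This is just a demonstration sketch with made-up paths and a made-up token: a server answers a POST with a 302, and the client's follow-up request arrives as a GET with no body:

import { createServer } from "node:http";

const server = createServer((req, res) => {
    console.log(`server saw: ${req.method} ${req.url}`);
    if (req.url === "/report") {
        // Stand-in for Logi losing track of a report and redirecting.
        res.writeHead(302, { Location: "/report-rerun" });
        res.end();
    } else {
        res.end("report body");
    }
});

server.listen(8080, async () => {
    // fetch follows the 302 automatically; per the HTTP spec, clients
    // turn the follow-up request into a GET and drop the POST body.
    const res = await fetch("http://localhost:8080/report", {
        method: "POST",
        body: "rdToken=abc123", // made-up token parameter
    });
    console.log(`client got ${res.status}, redirected: ${res.redirected}`);
    server.close();
});

// Prints "server saw: POST /report" then "server saw: GET /report-rerun";
// the token in the POST body never reaches the second request.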

This seemed odd and looked like a problem with Logi itself. Since this behavior was buried in some DLL somewhere, I wouldn't have much recourse but to file a ticket and see what happened. Being leery of posting tickets for half-investigated issues (don't you hate those?), I continued to look into it. After asking myself what could cause the website to want to redirect rather than just send back the data, I remembered that for us the application loaded data slowly on every request, while other departments reported that reports would load slowly once, then cache and run more quickly. Figuring that a bad cache mechanism might be deleting data and forcing the report to run again (and thus redirect when the original vanished), I delved into Logi's cache files. Sure enough, the cache files were appearing during rendering and rapidly vanishing once the report rendered. The documentation I found said these temp files should stick around for at least an hour by default, with garbage collection running only once every 5 minutes.
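
For what it's worth, confirming that kind of file churn doesn't take much. Here's a throwaway watcher sketch (the cache path is a hypothetical stand-in for wherever your Logi install keeps its temp files) that logs files appearing and vanishing while a report renders:

import { watch, existsSync } from "node:fs";

// Hypothetical location of Logi's cache/temp files.
const cacheDir = "C:\\LogiInfo\\rdDataCache";

// fs.watch fires a "rename" event for both creation and deletion, so
// check whether the file still exists to tell the two apart.
watch(cacheDir, (event, filename) => {
    if (event === "rename" && filename) {
        const alive = existsSync(`${cacheDir}\\${filename}`);
        console.log(`${new Date().toISOString()} ${filename} ${alive ? "appeared" : "vanished"}`);
    }
});

console.log(`watching ${cacheDir}; render a report and watch the churn...`);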

This cache issue explained our missing debug files (which lived in the temp files that were being deleted erroneously), and the mysterious 302s that were dropping our token and causing our 403s. I worked through some of IIS's settings to see whether Windows was declining to cache the files due to size restrictions, but none of the settings seemed to affect anything. At this point we submitted a ticket to Logi, but this time we had a specific problem we could describe, having narrowed our issue down to one specific thing. Sure enough, the support team actually recognized the issue and directed us to a patch for a bug in the version of Windows our server was running. We applied the patch and we finally, finally fixed the issue.

In conclusion

I'm not sure what, if any, moral there is to this story, aside from the fact that sometimes chasing bugs is like having a serpent drag you through wormholes across the universe. Perhaps it's just that these trips down the rabbit hole suck, but you're guaranteed to learn something you never knew you wanted to know.

Making CSS mud hills

I was recently asked to work on the UI and UX portion of our new product: colors, menu sizes, locations, that sort of thing. This is all CSS work. The extent of my work with CSS up to that point was almost exclusively adding a few style tags to adjust the placement of elements on a webpage. I'm no stranger to baptism by fire, since a lot of the skills I use on the job were things I learned after being told to do them. *cough* JavaScript *cough* After messing around with it for a bit, I started to get a handle on the levels of precedence of the various CSS rules and how they override each other, and I began to find some of the interesting properties that can be styled, in addition to things like media queries that allow for conditional styling.

I was, however, filled with the overwhelming sense that I wasn't doing it the 'right' way; that my style sheets were going to be super sloppy and a complete mess to someone who knows what they're doing. I got that sense I used to get when learning to code: that I was making it work, but doing it wrong, with no idea what the right way was. I have seen some of the horror shows that sloppy coding can bring about, and I have a feeling my work may wind up as the CSS version of that. Nonetheless, as always, it's a new skill that I picked up and can apply (if not super well) if there is a need. (Of course, you really, really should hire an actual UX person when you are designing this stuff and not rely on programmers to do it. Programmers are not artists and artists are not programmers.)

[Image: Menu Compare]

Who downloads the downloader?

The problem

I recently had a technical issue where I needed to access an application using an Android tablet (don't ask, just roll with it). My first line of attack was to simply try to use my Fire tablet to access the application. Apparently Fire OS has been determined to be not Android enough to qualify as Android anymore, so that was a bust. I don't really have ready access to any other Android devices, and I wasn't going to buy one simply to address a small technical problem.

Luckily, I did some dev work on an Android app a while ago, so I happen to know that Android Studio comes with everything you need to easily run virtual Android devices for debugging purposes. In this particular case a virtual device is fine; I just need the application to register as talking to an Android tablet. After opening my old install of Studio (and waiting for it to run through updates), I was in business and set up a virtual device to test with. It only took me a few minutes to realize that the virtual device did not have the Play Store app installed.

Into the quagmire

It seemed like even a virtual device meant for testing should at least be able to connect to the Play Store for more complex debugging than simply running smoke tests, but the Play Store simply was not there. After briefly having an existential crisis over the question 'how do you download the thing that lets you download other things?', I figured that it should be possible to download the Play Store through a browser, on the very off chance that someone managed to delete it from their phone. After playing around with the Play Store webpage through a browser, which the emulator thankfully does have (and managing to somehow download an application onto my actual phone), I wasn't really getting anywhere.

I figured that, as with most things in life, I had done something stupid and screwed it up, so I searched to see if there was a simple step for getting the Play Store onto an emulator that I had missed. I quickly realized that, although this seems to be a common problem, the emulator is simply not designed to support the Play Store. There were various solutions that people posed, like manually loading the Play Store files into the emulator, but they were all too complex for what I thought would be a simple task.

Luckily, one helpful poster on Stack Overflow noted that the most recent versions of Android Studio had been updated to include the Play Store on the emulator. I hit up the Android Studio site and, sure enough, that was in one of their most recent news postings. I noted that the version number associated with this change was higher than the version of Studio I had installed, so I went back and checked for updates. I had the latest version and there were no updates. After reading the news story more closely (reading is fundamental, kids), I saw that the build was not yet a stable build and was part of the beta builds they publish. After monkeying with the settings in Studio a bit, I discovered a section for 'updates' that specifically lets you choose which types of builds you want to use. Stable did not have the build I needed, and even Beta did not have it, so I turned to the ever-sketchy and crash-prone Dev build.

After going through the song and dance of manually grabbing the zip and extracting it (the Dev branch is meant to run separately from a regular Studio install, since it is crash prone), I was back in business and set up another virtual device. After several minutes of waiting for the emulation-slowed device to boot, I looked around and found… that the Play Store still wasn't there. After some more searching, I found out that only some of the device profiles on the emulator come with the Play Store installed (noted by a helpful icon in a column labeled 'Play Store'). I looked at the list of profiles and, as luck would have it… not a single tablet profile has the Play Store. At the moment it seems to be limited to the Android Wear and Nexus 5 phone profiles.

In the end…

The upshot of this whole experience is that Android Studio will likely soon have a stable build that can create virtual devices with the Play Store, but it's still under development right now and, for the moment, fairly limited in the number of virtual device profiles it covers. This makes my 'simple' technical problem a headache, but at least there will be an easy solution available long after I no longer need it.

Bulking Up

The problem

A common problem I've run across is that people will have these elaborate spreadsheets in Excel that contain all of the data they need to run things. Excel is the poor man's database, and people prefer its interface over the challenge of having to learn some SQL. (Let's ignore the fact that learning to manipulate Excel can take just as long, if not longer, than learning some SQL.) I've spent some time thinking about various ways I could import spreadsheet data into a DB and show people how easy it is to switch and do basic manipulation, to get them interested. Although I have yet to figure out how to import calculations and elaborate cross-spreadsheet connections, I have figured out how to import a lot of basic Excel data into a SQL server.

If you can get the data into an approximation of a table structure (e.g., a standard set of columns and a bunch of rows), then the data can be saved as a CSV (a standard save format in any spreadsheet application I've seen) and directly imported into a table in MS SQL using BULK INSERT. BULK INSERT is a command that will, among other things, load data from a file straight into a table.

Fixing a problem where the rain gets in

Here’s a test I ran:


-- Destination table matching the CSV's columns
create table inputTest
(
    ID int,
    FName varchar(2000),
    LName varchar(2000),
    age int
)

-- Pull the file straight into the table
BULK INSERT dbo.inputTest
from 'C:\someFileName.csv'
WITH
(
    FIELDTERMINATOR = ',',  -- column separator
    ROWTERMINATOR = '\n'    -- row separator
);

-- Check what came in
select * from inputTest

Where ‘someFileName.csv’ contains the data:


64,Joe,Blow,23
78,Alica,Aba,35

With the result being:

ID FName LName age
64 Joe Blow 23
78 Alica Aba 35

This is a very simple example, but it shows the basic format and how easily a bunch of data can be imported from a file. The command has a whole host of other options, including the ability to specify an error file (the ERRORFILE argument) that bad rows which can't be parsed get thrown into. Seems like the first step in turning a spreadsheet nightmare into something useful.