Making CSS mud hills

I was recently asked to work on the UI and UX portion of our new product; colors, menu sizes, placement, that sort of thing. This is all CSS work. The extent of my work with CSS up to that point was almost exclusively adding a few style tags to adjust the placement of elements on a webpage and such. I’m no stranger to baptism by fire, since a lot of the skills I use on the job are things I learned after being told to use them. *cough* JavaScript *cough* After messing around with it for a bit, I started to get a handle on the precedence of the various CSS rules and how they override each other, and I began to find some of the interesting properties that can be styled, in addition to things like media queries that allow for conditional styling.

I was, however, filled with the overwhelming sense that I wasn’t doing it the ‘right’ way; that my style sheets were going to be super sloppy and a complete mess to someone who knows what they are doing. I got that sense I used to get when learning to code: I was making it work, but doing it wrong, and I had no idea what the right way was. I have seen some of the horror shows that sloppy coding can bring about, and I have a feeling my work may wind up as the CSS version of that. Nonetheless, as always, it’s a new skill that I picked up and can use (if not super well) if there is a need. (Of course, you really, really should hire an actual UX person when you are designing this stuff and not rely on programmers to do it. Programmers are not artists and artists are not programmers.)

I have a good screenshot of the before and after that I will post shortly.

Who downloads the downloader?

The problem

I recently had a technical issue where I needed to access an application using an Android tablet (don’t ask, just roll with it). My first line of attack was to simply try to use my Fire tablet to access the application. Apparently, Fire OS has been determined to not be Android enough to qualify as Android anymore, so that was a bust. I don’t have ready access to any other Android devices, and I wasn’t going to buy one simply to address a small technical problem.

Luckily, I did some dev work on an Android app a while ago, so I happen to know that Android Studio comes with everything you need to easily run virtual Android devices for debugging purposes. In this particular case, a virtual device is fine; I just need the application to register as talking to an Android tablet. After opening my old install of Studio (and waiting for it to run through updates), I was in business and set up a virtual device to test with. It only took me a few minutes to realize that the virtual device did not have the Play Store app installed.

Into the quagmire

It seemed like even a virtual device meant for testing should at least be able to connect to the Play Store for more complex debugging than simply running smoke tests, but the Play Store simply was not there. After briefly having an existential crisis over the question ‘how do you download the thing that lets you download other things?’, I figured that it should be possible to download the Play Store through a browser, on the very off chance that someone managed to delete it from their phone. After playing around with the Play Store webpage through a browser, which the emulator thankfully does have (and managing to somehow download an application onto my actual phone), I wasn’t really getting anywhere.

I figured that, as with most things in life, I had done something stupid and screwed it up, so I searched to see if there was a simple step for getting the Play Store onto an emulator that I had missed. I quickly realized that, although this seems to be a common problem, the emulator is simply not designed to support the Play Store. There were various solutions that people proposed, like loading the app’s data into the emulator’s file system, but they were all too complex for what I thought would be a simple task.

Luckily, one helpful poster on Stack Overflow noted that the most recent versions of Android Studio were updated to include the Play Store on the emulator. I hit up the site for Studio and, sure enough, that was in one of their most recent news postings. I noticed that the version number associated with this change was higher than the version of Studio I had installed, so I went back and checked for updates. I had the latest version and there were no updates. After reading the news story more closely (reading is fundamental, kids), I saw that the build was not yet a stable build and was part of the beta builds they publish. After monkeying with the settings in Studio a bit, I discovered a section for ‘updates’ that specifically notes which type of builds you want to use. Stable did not have the build I needed, and even beta did not have it, so I turned to the ever-sketchy and crash-prone Dev build.

After going through the song and dance of manually grabbing the zip and extracting it (the dev branch is meant to run separately from a regular Studio install since it is crash prone), I was back in business and set up another virtual device. After several minutes of waiting for the emulation-slowed device to boot, I looked around and found… that the Play Store still wasn’t there. After some more searching, I found out that only some of the device profiles on the emulator come with the Play Store installed (noted by a helpful icon in a column labeled ‘Play Store’). I looked at the list of profiles and, as luck would have it… not a single tablet profile has the Play Store. At the moment it seems to be limited to Android Wear and the Nexus 5 phone profiles.

In the end…

The upshot of this whole experience is that Android Studio will likely soon have a stable build that can create virtual devices with the Play Store, but it’s still under development right now and it seems to be fairly limited in the number of virtual devices it does this for at the moment. This makes my ‘simple’ technical problem a headache, but at least there will be an easy solution available long after I no longer need it.

Bulking Up

The problem

A common problem I’ve run across is that people have these elaborate spreadsheets in Excel that contain all of the data they need to run things. Excel is the poor man’s database, but people prefer its interface over the challenge of having to learn some SQL. (Let’s ignore the fact that learning to manipulate Excel can take just as long, if not longer, than learning some SQL.) I’ve spent some time thinking about various ways I could import spreadsheet data into a DB and show people how easy it is to switch and do basic manipulation, to get them interested. Although I have yet to figure out how to import calculations and elaborate cross-spreadsheet connections, I have figured out how to import a lot of basic Excel data into SQL Server.

If you can get the data into an approximation of a table structure (i.e., a standard set of columns and a bunch of rows), then the data can be saved as a CSV (a standard save format in any spreadsheet application I’ve seen) and directly imported into a table on MS SQL using BULK INSERT. BULK INSERT is a command that will, among other things, directly import data from a file into a table.

Fixing a problem where the rain gets in

Here’s a test I ran:


create table inputTest
(
ID int,
FName varchar(2000),
LName varchar(2000),
age int
)

BULK INSERT dbo.inputTest
from 'C:\someFileName.csv'
WITH
(
FIELDTERMINATOR =',',
ROWTERMINATOR ='\n'
);

select * from inputTest

Where ‘someFileName.csv’ contains the data:


64,Joe,Blow,23
78,Alica,Aba,35

With the result being:

ID  FName  LName  age
64  Joe    Blow   23
78  Alica  Aba    35

This is a very simple example, but it shows the basic format and how to easily import a bunch of data from a file. The command has a whole host of other features, including the ability to specify an error file into which bad rows that can’t be parsed are thrown. Seems like the first step in turning a spreadsheet nightmare into something useful.
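For instance, here’s a sketch of how the error-file option might look; the file paths and the MAXERRORS value are placeholders I made up for the example:

```sql
--Same table as before, but bad rows get diverted instead of failing the whole load
BULK INSERT dbo.inputTest
from 'C:\someFileName.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n',
ERRORFILE = 'C:\someFileName_errors.log', --rows that fail to parse land here
MAXERRORS = 10 --give up if more than 10 rows are bad
);
```

If I’m reading the docs right, SQL Server also writes a companion file next to the error file describing why each rejected row failed to parse.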

Eight days a week

I was recently tasked with reworking a system responsible for something I loathe dealing with in SQL: dates. It always seemed like there was a huge hole in the built-in functionality for dates on SQL Server. It took until 2012 to get DATEFROMPARTS. How is that not, like, the second function you write for the date class?

The problem


The system I was reworking is responsible for creating a series of rows to represent a continuous sequence of weeks between date A and date B. The returned rows must be whole weeks, even if a week extends outside of the date range. (So, no half weeks if the week started before date A.) The start/end of the week is configurable for different clients. In addition, the weeks are associated with a month so that data can be summarized by weeks or by ‘months’ as defined by those weeks. Where this gets interesting is on weeks that span two months. In that situation there are four options, configurable to client preference:

  • Associate the week with whichever month it has more days in
  • Associate the week with the month it starts/ends in
  • Associate the week with whichever month a certain day of the week is in
  • Break the weeks into two separate records for each month

Someone set us up the data


The solution I came up with is fairly straightforward but uses some decently clever math to avoid most of the actual finagling with the date class.

First, I create a table variable to hold the records and set up the date rows. I pull every date between date A and date B that falls on the day of the week configured in the system. That configured value happens to denote the day of the week the week ends on, so I subtract six days from each of those dates to get the corresponding beginning of the week. This gives records that represent a continuous series of weeks.


declare @EndOfWeekDayOfWeek as int, @DateA as Date, @DateB as Date; --This will be set by parameters or configurations

declare @WeekRecords table
(
WeekStartDate date,
WeekEndDate date,
AssociatedMonth int,
AssociatedYear int
)

with [Days] as (
select @DateA as [date], 1 as DayNum
union all
select DateAdd(day, 1, [date]) as [Date], DayNum + 1 as DayNum
from [Days]
where [Date] < DATEADD(day, 7, @DateB) --We overshoot the date range a bit because dateB might fall in the middle of a week, so we have to pull the whole final week
)
insert into @WeekRecords (WeekStartDate, WeekEndDate)
select dateadd(day, -6, [date]), [date]
from [Days]
where Datepart(weekday, [date]) = @EndOfWeekDayOfWeek
option (MAXRECURSION 1000)

Next, I update the dates to associate them with months.

For the configurations that simply associate the week with the month it begins or ends in, this is straightforward: just grab the month and year number from the start or end date.

--Beginning of week
update @WeekRecords set AssociatedMonth = month(WeekStartDate), AssociatedYear = year(WeekStartDate)

--end of week
update @WeekRecords set AssociatedMonth = month(WeekEndDate), AssociatedYear = year(WeekEndDate)

For the configuration that divides the week into two records, one for each month, it is also fairly straightforward. First, find all the weeks that start and end in different months, e.g. month(WeekStartDate) <> month(WeekEndDate), and add new records representing the portion of each such week that runs from the 1st of the month to the end date of the original record. Next, update all the records where the months still don’t match (which will exclude the new records) to end on the last day of the month they start in (which the EOMONTH function conveniently provides), and at the same time associate everything with the month it starts in (which is guaranteed to be the correct month now that the new set of records starts on the first of the next month).


insert into @WeekRecords (WeekStartDate, WeekEndDate, AssociatedMonth, AssociatedYear)
select datefromparts(year(WeekEndDate), month(WeekEndDate), 1), WeekEndDate, month(WeekEndDate), year(WeekEndDate)
from @WeekRecords where month(WeekStartDate) <> month(WeekEndDate)

update @WeekRecords set WeekEndDate = case when month(WeekStartDate) <> month(WeekEndDate) then EOMONTH(WeekStartDate) else WeekEndDate end, AssociatedMonth = month(WeekStartDate),
AssociatedYear = year(WeekStartDate)

For the configuration that associates the week with whichever month it has more days in, we can do something clever with the day of the month. For the weeks that straddle a month, the day of the month of the ending date is the number of days that the week has in the ending month. If this is 3 or less, then the week has 4 or more days in the starting month and is associated with that month; otherwise the opposite is true.

update @WeekRecords set AssociatedMonth = case when day(WeekEndDate) > 3 then month(WeekEndDate) else month(WeekStartDate) end,
AssociatedYear = case when day(WeekEndDate) > 3 then year(WeekEndDate) else year(WeekStartDate) end

The case where we associate the week with the month that a specific day of the week is in is the most complex, but it can be done efficiently with some clever math. To account for the fact that the week can wrap, so that a ‘later’ day of the week falls before an ‘earlier’ one (if the week ends on Tuesday, then Thursday falls before that Tuesday), I subtract the target day of the week from the end-of-week day. This gives a range that uniformly increases up to the value of the end-of-week day.

We can get the day of the week that the first of the month fell on by taking a known day of the week (in this case, the day our week ends on) and subtracting the day number of the month from it, counting back through the days of the week to the first of the month. Once I have that, I know that if the target day of the week fell before that date, then the target day happened before the first and was thus in the starting month. Both day-of-week values, the target and the calculated one, are adjusted so they can extend from a possibly negative value (representing days in the prior calendar week) up to the end-of-week day.


Declare @AdjustTargetDayOfWeek as int = case when @EndOfWeekDayOfWeek < @useClinicalDay then @EndOfWeekDayOfWeek - @useClinicalDay + 1 else @useClinicalDay end --@useClinicalDay holds the configured target day of the week

update @WeekRecords set AssociatedMonth = case when month(WeekStartDate) <> month(WeekEndDate) and (@EndOfWeekDayOfWeek - (day(WeekEndDate) - 1) > @AdjustTargetDayOfWeek) then month(WeekStartDate) else month(WeekEndDate) end,
AssociatedYear = case when month(WeekStartDate) <> month(WeekEndDate) and (@EndOfWeekDayOfWeek - (day(WeekEndDate) - 1) > @AdjustTargetDayOfWeek) then year(WeekStartDate) else year(WeekEndDate) end

Finally, we have to trim out any weeks we created that fall entirely before date A or after date B.


delete from @WeekRecords where WeekEndDate < @DateA or WeekStartDate > @DateB

Here's the entire code block:


declare @EndOfWeekDayOfWeek as int, @DateA as Date, @DateB as Date, @ConfigSetting as varchar(50), @useClinicalDay as int; --These will be set by parameters or configurations

declare @WeekRecords table
(
WeekStartDate date,
WeekEndDate date,
AssociatedMonth int,
AssociatedYear int
)

with [Days] as (
select @DateA as [date], 1 as DayNum
union all
select DateAdd(day, 1, [date]) as [Date], DayNum + 1 as DayNum
from [Days]
where [Date] < DATEADD(day, 7, @DateB) --We overshoot the date range a bit because dateB might fall in the middle of a week, so we have to pull the whole final week
)
insert into @WeekRecords (WeekStartDate, WeekEndDate)
select dateadd(day, -6, [date]), [date]
from [Days]
where Datepart(weekday, [date]) = @EndOfWeekDayOfWeek
option (MAXRECURSION 1000)

if @ConfigSetting = 'Split Week'
begin
insert into @WeekRecords (WeekStartDate, WeekEndDate, AssociatedMonth, AssociatedYear)
select datefromparts(year(WeekEndDate), month(WeekEndDate), 1), WeekEndDate, month(WeekEndDate), year(WeekEndDate)
from @WeekRecords where month(WeekStartDate) <> month(WeekEndDate)

update @WeekRecords set WeekEndDate = case when month(WeekStartDate) <> month(WeekEndDate) then EOMONTH(WeekStartDate) else WeekEndDate end, AssociatedMonth = month(WeekStartDate), AssociatedYear = year(WeekStartDate)
end
else if @ConfigSetting = 'Start Month'
begin
update @WeekRecords set AssociatedMonth = month(WeekStartDate), AssociatedYear = year(WeekStartDate)
end
else if @ConfigSetting = 'End Month'
begin
update @WeekRecords set AssociatedMonth = month(WeekEndDate), AssociatedYear = year(WeekEndDate)
end
else if @ConfigSetting = 'Target Day of Week'
begin
Declare @AdjustTargetDayOfWeek as int = case when @EndOfWeekDayOfWeek < @useClinicalDay then @EndOfWeekDayOfWeek - @useClinicalDay + 1 else @useClinicalDay end

update @WeekRecords set AssociatedMonth = case when month(WeekStartDate) <> month(WeekEndDate) and (@EndOfWeekDayOfWeek - (day(WeekEndDate) - 1) > @AdjustTargetDayOfWeek) then month(WeekStartDate) else month(WeekEndDate) end,
AssociatedYear = case when month(WeekStartDate) <> month(WeekEndDate) and (@EndOfWeekDayOfWeek - (day(WeekEndDate) - 1) > @AdjustTargetDayOfWeek) then year(WeekStartDate) else year(WeekEndDate) end
end
else
begin
update @WeekRecords set AssociatedMonth = case when day(WeekEndDate) > 3 then month(WeekEndDate) else month(WeekStartDate) end,
AssociatedYear = case when day(WeekEndDate) > 3 then year(WeekEndDate) else year(WeekStartDate) end
end

delete from @WeekRecords where WeekEndDate < @DateA or WeekStartDate > @DateB --Trim the excess weeks

select WeekStartDate, WeekEndDate, AssociatedMonth, AssociatedYear
from @WeekRecords
order by WeekStartDate desc

The Long and Winding Road

Okay, so here’s a real rabbit hole of a journey trying to close a security hole on SQL Server. I got to explore new and interesting topics, but I inevitably found out that no, it can’t be done. (Isn’t that always the way?)

The Problem

In the current version of our product, we have multiple databases that house related but conceptually distinct information on the same instance of SQL Server. We also extract information from remote client servers for aggregation using linked server objects. IP information varies from client to client, so dynamic SQL must be used to define the remote host in the query.

We have a tool that runs all of our scheduled jobs through OSQL rather than using SQL Server Agent. At regular intervals during the day, the job runner needs to log in with a service account and run a stored procedure that queries across several of our databases and out to client databases. We want to tightly control security on this account, since it will be accessing a database with sensitive information and will be talking with client databases, which also have sensitive information.

Small beginnings

My initial plan for implementing improved security was fairly straightforward: we encapsulate all of the actual logic into stored procedures that only DBAs have permission to alter, and we give our service account exec permission on the single database that contains these stored procedures. This way, our service account does not actually have permission to read sensitive data (or any data whatsoever) and it can still run jobs.

The problem here is that we need to use dynamic SQL, since the actual client server we will be querying is not known until run time (the name is pulled from a table). As noted in the documentation, dynamic SQL executes as if the user calling the procedure had executed the query itself. So we would still have to give the account permission to read everything in order to run the stored procedure. Bummer.
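To illustrate the pattern (the config table and column names here are made up for the example), the dynamic call inside the procedure looks something like this:

```sql
--Sketch of the dynamic query; dbo.ClientConfig and its columns are hypothetical
declare @ServerName sysname, @sql nvarchar(max);

select @ServerName = ServerName from dbo.ClientConfig where ClientId = 42;

set @sql = N'select ClientId, Total from ' + quotename(@ServerName) + N'.ClientDb.dbo.Totals';

exec sp_executesql @sql; --runs with the permissions of whoever called the procedure
```

That last line is exactly the problem: sp_executesql checks the caller’s permissions, not the procedure owner’s.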

However, this pointed me to my next possible solution.

I’m not me, I’m that guy over there!

Impersonation can be used to allow a user to execute something that they do not normally have permission to execute. In particular, stored procedures can be written using an ‘EXECUTE AS’ clause.

CREATE PROCEDURE Foo
--Arbitrary parameters here

WITH EXECUTE AS 'SomeProxyUser'
AS
--Arbitrary code here

Reference

This causes the SP to run as if it were called by the specified user account rather than the actual caller. Cool, now I can set up a secure proxy account and write the SP to use that account rather than the calling account. Once again, the service account will only have exec permission and will not be able to read any data. Exactly what we need, right?

Well, no. As it turns out, our proxy user is just that, a user, and users exist only within a single database. A login can be mapped to users on different databases, but the user accounts themselves only exist (and have permissions) on a single database. Since we are querying across multiple databases, no single user can be set up with permission to access data in the other databases.
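The login/user distinction, sketched out (all names here are hypothetical):

```sql
--One login at the server level...
create login ProxyLogin with password = 'SomeStrongPassword1!';

--...but a separate user object in each database, each with its own permissions
use DatabaseA;
create user ProxyUser for login ProxyLogin;

use DatabaseB;
create user ProxyUser for login ProxyLogin;
```

EXECUTE AS with a user name only switches context within the database that user lives in, which is what sinks this approach for cross-database queries.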

After that harsh lesson in user accounts on SQL Server, I found another possible solution using signed stored procedures.

Signed with a kiss… and 256 bits encryption

A stored procedure can be signed with a certificate, and then a user can be created from that same certificate. This certificate user can be granted the necessary permissions to run the SP. When someone runs the SP, the permissions of the certificate user are added to their own for the duration of the execution. See here.

What makes this different is that the certificate can be backed up, moved to a different database, and used there to create a new user from that same certificate. This user can be granted AUTHENTICATE SERVER permission, at which point the certificate vouches for the caller on both databases. Documentation here. Blog post here.
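As a rough sketch of the moving parts (the certificate, procedure, table, and file path names are all hypothetical):

```sql
--Create a certificate, sign the procedure with it, and map a user to it
create certificate SigningCert with subject = 'SP signing certificate';
add signature to dbo.Foo by certificate SigningCert;
create user SigningCertUser from certificate SigningCert;
grant select on dbo.SensitiveTable to SigningCertUser;

--The certificate can then be backed up and restored into the other database,
--where a second certificate user picks up the same signature
backup certificate SigningCert to file = 'C:\SigningCert.cer';
```

Any ALTER to dbo.Foo invalidates the signature, which is why the re-signing step has to be part of the deployment routine.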

The upshot is that our service account can be granted nothing but execute permission and still access the data on both databases through the certificate. The SPs need to be re-signed every time they are updated, but it’s a small price to pay for a secure database.

As you might have guessed, this too has a problem.

Et tu, linked server?

Linked server objects have a security context. The options boil down to creating mappings for logins on the server (specifying what login they use on the remote server) and defining what to do with anyone who is not one of those logins. The options are: don’t allow a connection, make a connection with no security context, use the user’s current security context (AD token or SQL username and password), or use a specified separate login.
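Those mappings are managed with sp_addlinkedsrvlogin; a sketch with hypothetical server, login, and password names:

```sql
--Map one specific local login to a known remote login...
exec sp_addlinkedsrvlogin
    @rmtsrvname = 'ClientServer',
    @useself = 'FALSE',
    @locallogin = 'JobRunnerLogin',
    @rmtuser = 'RemoteReadOnly',
    @rmtpassword = 'SomeStrongPassword1!';

--...and drop the default mapping so unlisted logins can't connect at all
exec sp_droplinkedsrvlogin @rmtsrvname = 'ClientServer', @locallogin = NULL;
```

The catch, as described below, is that every one of these mappings keys off a login.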

We only want to grant permissions to our signed user, so that the service account itself never gets access to remote servers, but our signed user does not have a login; it only exists as database users. As a result, it cannot be mapped in the linked server object, and it has no security context of its own. The only viable option would be to set the linked server to use a specified login for all users not on the mapped list, which would pick up our certificate user. However, this opens the door for any account to get access to the remote server, since any account would now be given that mapping.

In short, if we’re going to use a linked server object, we will have to use something with an actual login.

Back to impersonation

Previously, my impersonation attempt simply impersonated a database user. However, it is also possible to impersonate a server login. See here. This should solve the problem: add a line to the stored procedure that causes the service account to impersonate another login that has the necessary permissions, while never granting the service account direct permission on the database.
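In sketch form (the login names are hypothetical):

```sql
--A DBA grants the job runner the right to impersonate a privileged login...
grant impersonate on login::DataReaderLogin to JobRunnerLogin;

--...and the stored procedure switches context around the sensitive work
execute as login = 'DataReaderLogin';
--queries that need DataReaderLogin's permissions go here
revert;
```

That GRANT IMPERSONATE line is also exactly what makes this approach leaky.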

As always… no, not quite. In order to impersonate a login, an account must be granted the IMPERSONATE permission on that login. Since the impersonated login can read data, and our service account can impersonate that login, there is nothing stopping our service account from reading sensitive data through that account aside from ignorance of the impersonation permission. Security through obscurity is simply bad practice, so this solution isn’t going to work.

Where do we go from here?

So far all of my approaches have failed due to the complexity of our needs. So, what now?

I have looked into alternate ways of accessing remote data that avoid linked server objects, which would let us use the certificate user approach. The solution I see now is making remote calls with OPENROWSET, which does not require a linked server and simply allows the caller to state the login credentials directly in the call. Documentation. This, however, would require us to rework how we do things, since we’d now have to store our remote credentials differently. We’d also have to look into encryption and security for those stored passwords.
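A sketch of what that looks like (the server, credentials, and table names are placeholders, and the instance has to have ad hoc distributed queries enabled):

```sql
--Ad hoc remote query with the credentials stated directly in the call
select r.*
from openrowset('SQLNCLI',
    'Server=ClientServerName;UID=RemoteReadOnly;PWD=SomeStrongPassword1!;',
    'select ClientId, Total from ClientDb.dbo.Totals') as r;
```

Since the connection string is just text, it can be built dynamically from stored credentials, which is what makes the certificate user approach viable again but also what makes credential storage the new problem.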

In the end, I’m still looking for an ideal solution, but I think we may wind up simply having to do things differently, or having slightly looser security than I’m looking for. I’m not sure, but I do know it was quite the ride getting even this far.

We’ve re-branded, it tests well with focus groups

So, apparently when you’d like to look like an intelligent developer who knows stuff, you’re supposed to start a technical blog where you write about all the developer stuff you deal with in your life. That sounds like a very millennial thing to do, but who am I to question the infinite wisdom of ‘they who say stuff’. I happen to have a website that I am paying for and not using, so why not try to look like someone who knows what’s what.

Screw you gravity, I’m Spider-man!

I’ve been working on a wall-walking system in UDK and have it mostly finished. The wall-movement controls aren’t quite what I want them to be, but I fear I lack the knowledge of angles to make them work (they’re good, but not perfect). I still have to do roof walking, which is a bit more interesting, since a roof or floor does not allow for an up orientation when clinging.

A great many days later…

I’ve uploaded a new version of Mazes. This version adds darkness (and prompts the user for whether they want darkness and how far they can see), and also has randomized beginning and ending locations. Now I’ve gone from a simple algorithm test, to a simple game, to a game that might actually pass as a crummy mini-game. In another few years maybe it’ll reach the quality of a mediocre game!

It is pitch black, you are likely to be eaten by a grue.

I’ve added some basic ‘Darkness’ to my maze program by making it so only things within a certain radius draw. It’s caused an issue with how the walls are drawn since each wall relies on the next one to add the intersection character and so on, but it looks fine and 9it bugs me more then it actual is a problem. it also has made a big mess of my code but what’s new. I’m going to add some prompts for variables like sight range and weather or not to include sight and probably clean the code and then re-upload it.