Thursday, December 19, 2013

Feature design for classification of network traffic

For the last few days, I have been working on designing a classification system that can separate bot-infected sessions from benign ones. Our training data came in the form of two tables: one listing the sessions along with bot/user labels, and the other listing the HTTP requests made during those sessions. I loaded them into two PostgreSQL tables sharing some common columns (client IP, server IP, session ID), which let me join the two. Some of the HTTP requests were for the actual pages, whereas many more were for the objects embedded within those pages (e.g., GIF files). We identified which HTTP requests were for pages using a few heuristics. Since I was free to choose the features for building the classifier, I did some initial exploratory analysis, which revealed some important differences between the two types of traffic:
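For illustration, here is a minimal sketch of the join and the page/object split, done in pandas rather than in the PostgreSQL tables described above. The file names, the column names (client_ip, server_ip, session_id, url, timestamp, label) and the extension-based heuristic are all assumptions for the sake of the example, not the actual schema or the heuristics we used.

```python
import pandas as pd

# Illustrative stand-ins for the two tables; the real data lived in PostgreSQL.
sessions = pd.read_csv("sessions.csv")                              # one row per session, with a bot/user label
requests = pd.read_csv("requests.csv", parse_dates=["timestamp"])   # one row per HTTP request

# Join the two tables on their common columns.
joined = requests.merge(sessions, on=["client_ip", "server_ip", "session_id"])

# Hypothetical page-vs-object heuristic: treat a request as a page request
# unless its URL path ends in a typical embedded-object extension.
OBJECT_EXTENSIONS = (".gif", ".jpg", ".png", ".css", ".js", ".ico")

def looks_like_page(url: str) -> bool:
    path = url.lower().split("?", 1)[0]
    return not path.endswith(OBJECT_EXTENSIONS)

joined["is_page"] = joined["url"].apply(looks_like_page)
```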

1) The median number of page requests in user sessions was much lower than in bot sessions. This was probably because users stay on pages and read/view the actual content, whereas bots just request page after page through automated scripts.

2) The median number of distinct pages requested in user sessions was also much lower than in bot sessions.

3) The median duration of user sessions was also much shorter than that of bot sessions, probably because real users leave when they are done with the content, while bots stay around to do the damage.

We created three features based on these three metrics. Additionally, since we had data on the sequence of page URLs visited in a session, we extracted 2-grams out of those sequences, and for each session and each 2-gram, kept a flag to indicate whether that 2-gram of page URLs appeared in that session or not (i.e., whether the user/bot in that session visited those two URLs in that order). The 2-grams are features that capture the sequential nature of the data. Since each 2-gram thus became a feature in its own right, the total number of features exceeded 5,500. Since most 2-grams do not occur in most sessions, this gave rise to a very sparse high-dimensional matrix, as we often see in text mining.
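A rough sketch of this feature construction, continuing from the (assumed) `joined` table of the previous sketch: the three aggregate features per session, plus the binary 2-gram indicators stored as a sparse matrix. Column names are again illustrative.

```python
from scipy.sparse import lil_matrix

# Page requests only, ordered by time within each session.
pages = joined[joined["is_page"]].sort_values(["session_id", "timestamp"])
by_session = pages.groupby("session_id")

# The three aggregate features, one row per session.
agg = by_session.agg(
    num_page_requests=("url", "size"),
    num_distinct_pages=("url", "nunique"),
)
agg["duration_secs"] = joined.groupby("session_id")["timestamp"].agg(
    lambda t: (t.max() - t.min()).total_seconds()
)

# Binary 2-gram indicators, kept sparse because most 2-grams never occur in most sessions.
session_bigrams = {}   # session_id -> set of consecutive page-URL pairs seen in that session
vocab = {}             # 2-gram -> column index
for sid, grp in by_session:
    urls = grp["url"].tolist()
    grams = set(zip(urls, urls[1:]))     # consecutive page-URL pairs = 2-grams
    session_bigrams[sid] = grams
    for g in grams:
        vocab.setdefault(g, len(vocab))

X = lil_matrix((len(session_bigrams), len(vocab)), dtype=int)
for row, grams in enumerate(session_bigrams.values()):
    for g in grams:
        X[row, vocab[g]] = 1
```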

We computed the information gain of these 2-gram-based (binary) features and ordered them in descending order of information gain, so that we could select as many of the top features as we wanted to build the model. More on that to follow...
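As a sketch of that ranking step, assuming the sparse matrix X from the previous sketch and a bot/user label per session (the "label" column and the "bot" value are assumed names), this is a generic formulation of information gain for a binary split, not necessarily the exact code we used:

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (in bits) of a 0/1 label vector."""
    p = labels.mean()
    if p == 0.0 or p == 1.0:
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def information_gain(feature, labels):
    """Entropy of the labels minus the weighted entropy after splitting on a binary feature."""
    mask = feature.astype(bool)
    p1 = mask.mean()
    if p1 == 0.0 or p1 == 1.0:
        return 0.0   # constant feature: splitting on it tells us nothing
    return entropy(labels) - (p1 * entropy(labels[mask]) + (1 - p1) * entropy(labels[~mask]))

# Labels aligned with the rows of X (groupby sorts by session_id, matching the order above).
y = (pages.groupby("session_id")["label"].first() == "bot").astype(int).to_numpy()

X_dense = X.toarray()                     # sessions x 2-grams, from the sketch above
gains = [information_gain(X_dense[:, j], y) for j in range(X_dense.shape[1])]
ranked = np.argsort(gains)[::-1]          # 2-gram column indices, highest information gain first
```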

Sunday, September 15, 2013

Completed Linear Algebra course

I completed the Linear Algebra course by Dr. Philip Klein this summer, and here is my certificate. It was a rigorous course in which I revised important concepts of Linear Algebra and implemented related algorithms in Python.

Monday, August 19, 2013

Joined Impetus

I joined Impetus Technologies as a Data Scientist in New York. Looking forward to working on challenging analytics projects with a more diverse mix of technologies...

Wednesday, April 3, 2013

Low-dimensional visualization of Casebook data

I recently started exploring biplot approximation for Casebook data, since some aspects of Casebook data make it inherently high-dimensional (e.g., when a "removal episode" starts for a child, we collect, through quizzes, a bunch of data on the substance abuse history of the child and the parent, whether the caretaker was unable to cope, and whether the child has a disability), and hence comparing entry cohorts, and even children, in terms of these attributes is interesting. I have tried two types of biplots so far: 1) the biplot based on PCA and 2) correspondence analysis. Both techniques are based on SVD, but the situations where they are applied are different.

The PCA-based biplot is used mostly when we have observations on attributes that are continuous in nature. For example, for each entry cohort from 2008 to 2012, I took the percentage of removal episodes with male children, and the percentage of removal episodes where the children were reported to have problems like substance abuse, disability, etc. So my "observations" were the entry cohorts, and my "attributes" were these percentages. The nice thing about the PCA-based biplot is that it plots the "observations" and the "attributes" on the same 2D plot, which reveals how close or far the observations are from each other, how close or far the attributes are from each other, and how the attributes are related to the observations. This paper by Gabriel laid the foundation of the PCA-based biplot. In summary, the PCA-based biplot approximation takes the projection of each observation and each attribute along the first two principal components, and uses them as the coordinates on a 2D plot.

Some of the interesting findings from our data using the PCA-based biplot: the child's drug abuse, the child's alcohol abuse and physical abuse were closely related; the parent's alcohol abuse, the caretaker's inability to cope, inadequate housing for the child and relinquishment were likewise very close; and so were the child's behavioral problems and the incarceration of a parent. The first principal component explained 49% of the variation in the data, while the second explained 32%, so 81% of the total variance was explained by the first two. Each principal component was a linear combination of 16 variables, and the total variance was explained by 5 principal components.
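For the record, here is a bare-bones version of a PCA-based biplot in Python. The cohort and attribute names are placeholders and the random matrix merely stands in for the real percentages; also, biplot scaling conventions vary, and this sketch uses the row-principal (observations scaled by singular values) form rather than necessarily the one we used.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder data: rows = entry cohorts, columns = attribute percentages.
cohorts = ["2008", "2009", "2010", "2011", "2012"]
attrs = ["pct_male", "pct_substance_abuse", "pct_disability", "pct_inadequate_housing"]
data = np.random.rand(len(cohorts), len(attrs)) * 100   # stand-in for the real values

Xc = data - data.mean(axis=0)                  # column-center before PCA
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

explained = s**2 / np.sum(s**2)                # share of variance per principal component

obs_xy = U[:, :2] * s[:2]                      # observations projected on the first 2 PCs
attr_xy = Vt[:2].T                             # attribute loadings in the same plane

fig, ax = plt.subplots()
ax.scatter(obs_xy[:, 0], obs_xy[:, 1])
for name, (x, y) in zip(cohorts, obs_xy):
    ax.annotate(name, (x, y))
for name, (x, y) in zip(attrs, attr_xy):
    ax.arrow(0, 0, x, y, head_width=0.02, length_includes_head=True)
    ax.annotate(name, (x, y))
ax.set_xlabel("PC1 ({:.0%} of variance)".format(explained[0]))
ax.set_ylabel("PC2 ({:.0%} of variance)".format(explained[1]))
plt.show()
```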

Correspondence analysis, on the other hand, is used to see how two (or more) categorical covariates are related to each other. In that sense, it is closely related to the chi-square test of independence. I found this document to be a fairly easy-to-understand and very intuitive explanation of CA, while this is a more mathematical one. I also liked the way this article compares the inertia of a row in a contingency table with the physical concept of angular inertia, and especially the fact that it makes the following point:

"Correspondence analysis provides a means of representing a table of  distances in a graphical form, with rows represented by points, so that the distances between points approximate the  distances between the rows they represent.".

In our case, the permanency outcome of a child can be adoption, guardianship, or reunification (the more desirable ones); or transfer to another placement, transfer to collaborative care, or the child running away resulting in a dismissal of wardship (the less desirable ones). The type of provider for a removal episode, on the other hand, can be a placement provider, a residential resource, a foster family, or even a person. We created the contingency table with the permanency outcomes as the rows and the provider types as the columns, and applied correspondence analysis to it.

The important structure that CA revealed was that placement in a foster family leads more often to adoption, guardianship and, to some extent, reunification - in summary, to the more desirable outcomes - whereas placement with a placement provider or a residential resource leads more often to emancipation (aging out), transfer to another agency, or the child being placed in collaborative care. The first two eigenvalues were 0.065 and 0.0145, and the total inertia (which can be shown to be the chi-square statistic for the contingency table divided by the sum of all cell values) was 0.084, so the first two dimensions explained 0.065/0.084 = 77% and 0.0145/0.084 = 17% of the inertia, respectively. Among the rows, the biggest contributors to the total inertia of 0.084 were adoption (26%), transfer to another agency (25%) and guardianship (22%); and among the columns, the biggest contributors were residential resource (40.6%), placement provider (26%) and foster family (24%).
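A minimal numpy sketch of correspondence analysis on a contingency table, showing where the total inertia, the per-dimension shares, and the row contributions come from. The counts below are made-up placeholders (and a smaller table than ours), not the actual Casebook numbers.

```python
import numpy as np

# Placeholder contingency table: outcomes (rows) x provider types (columns).
N = np.array([
    [120,  30,  15],
    [ 40,  25,  10],
    [ 60,  45,  20],
    [ 10,  35,  25],
])

n = N.sum()
P = N / n                              # correspondence matrix
r = P.sum(axis=1)                      # row masses
c = P.sum(axis=0)                      # column masses

# Standardized residuals; the sum of their squares is the total inertia,
# i.e. the chi-square statistic of the table divided by the grand total n.
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, s, Vt = np.linalg.svd(S, full_matrices=False)

total_inertia = np.sum(s**2)           # == chi2 / n
explained = s**2 / total_inertia       # share of inertia per dimension

# Principal coordinates: rows and columns plotted in the same low-dimensional space.
row_coords = (U * s) / np.sqrt(r)[:, None]
col_coords = (Vt.T * s) / np.sqrt(c)[:, None]

# Contribution of each row (outcome) to the total inertia.
row_contrib = (S**2).sum(axis=1) / total_inertia
```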

Wednesday, January 30, 2013

Numeric column functions in VoltDB 3.0

I made my first contribution to the in-memory database VoltDB, the product of the database startup of the same name, founded by the database pioneer Dr. Michael Stonebraker. My contribution was the set of numeric column functions ABS, FLOOR, CEILING, SQRT, EXP and POWER. The feature shipped as part of VoltDB 3.0, and I got a mention on Twitter from VoltDB.