Friday, December 21, 2012

Accelerated Failure Time Models

I continued my experiments with survival analysis, this time with Accelerated Failure Time (AFT) models. Although the Cox-Snell residuals, as explained in this post, indicated that the Cox model was a pretty good match, the Cox model relies on the proportional hazards assumption, which holds only approximately for our data, and I was interested in seeing whether we could do away with it. The AFT model instead assumes that the event times (in our case, the actual times when kids exit from care) come from a theoretical distribution such as the Weibull, log-logistic, or exponential. The graphical test for whether the exit times follow a Weibull distribution (apart from drawing a histogram of exit times with equal bucket widths, which I did too) is to plot log(-log(S(t))) against log(t), where S(t) is the survival function estimated by Kaplan-Meier; if the assumption holds, the points should fall on a straight line (an explanation is here). What we got was roughly a straight line. The equivalent check for the log-logistic distribution is to plot log((1 - S(t))/S(t)) against log(t), which should likewise be a straight line if the assumption holds, but the curve I got was almost quadratic, so I pursued the Weibull. I used the survreg function from the survival library. For prediction with the AFT Weibull model, I used R's generic predict function, computing the quantiles and plotting them in reverse, using ideas as in this lecture.
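The Weibull check is simple enough to sketch outside R. Here is a minimal NumPy illustration (the actual analysis above uses R's survival package; the shape and scale values below are made up for the demo, not Casebook estimates). With no censoring, the Kaplan-Meier estimate reduces to the empirical survival function, and for Weibull data the plot of log(-log S(t)) against log t is a straight line whose slope recovers the shape parameter:

```python
import numpy as np

rng = np.random.default_rng(0)
k, lam = 1.5, 300.0                # hypothetical shape/scale, not Casebook estimates
t = lam * rng.weibull(k, 2000)     # simulated exit times, no censoring

# With no censoring, Kaplan-Meier reduces to the empirical survival function;
# the plotting positions 1 - i/(n+1) keep S strictly between 0 and 1.
ts = np.sort(t)
S = 1.0 - np.arange(1, len(ts) + 1) / (len(ts) + 1)

# Weibull graphical check: log(-log S(t)) vs log t should be a straight
# line whose slope equals the shape parameter k.
x, y = np.log(ts), np.log(-np.log(S))
slope, intercept = np.polyfit(x, y, 1)
print(f"fitted slope {slope:.2f}, true shape {k}")
```

With real, censored data one would take S(t) from the Kaplan-Meier estimator instead of the empirical survival function, but the plot is read the same way.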

However, when I did the graphical check with Cox-Snell residuals and their cumulative hazards, as I had done for the Cox model, it seemed that the Cox model was the better fit. The reason was that the estimated survival probabilities of the children who exited, computed at their actual exit times, did not follow a uniform distribution very well, and the graphical check with Cox-Snell residuals rests on the assumption that they would.

However, when we plot (one plot per entry-cohort year: 2008, 2009, 2010, 2011, or 2012) the three survival functions estimated by Kaplan-Meier, Cox regression, and AFT regression (for the last two, we apply the model to an "average kid", i.e., a fictitious kid with every covariate set to its mean), the last two come pretty close to each other. Both also stay close to the Kaplan-Meier curve for the older entry cohorts, and drift farther from it for the more recent cohorts. This is actually how we want it to be: for the older cohorts, most (93-95%) of the kids have exited care, but for the more recent cohorts only very few have, so the Kaplan-Meier curve is essentially a biased estimate, based mostly on the kids who have already left — and for the 2012 entry cohort, those kids' lengths of stay, as of today, cannot exceed 350 days, and so on. The pattern seen in the older cohorts builds our trust in the model, and we expect the Kaplan-Meier and Cox/AFT estimates to move closer together as more kids from the recent cohorts exit care and the model gets updated.

Tuesday, December 4, 2012

Survival Analysis, Part II

I continued my work on survival analysis, this time on real Casebook data, as Casebook went live on July 5. In child welfare, finding the length of stay by entry cohort has always been hard: caseworkers had to wait until all or most of the children who entered care in a given period had exited care to get an idea of the aggregate length of stay. To give caseworkers an idea of how long kids currently in care are going to stay, we treated exits from care as the "events" in survival data and used Cox regression to predict the length of stay. We used a number of covariates: entry cohort (2008-2012), age at entry into care, gender, race, parents' alcohol and drug abuse problems, child's behavioral problems, child's disability, history of domestic violence, family structure, etc.

Since the Cox model returns a survival function rather than a single length-of-stay value, computing a residual is not as simple as computing (expected - observed), even for a kid whose actual length of stay we know, and it has to be done rather indirectly. There are various residuals that focus on different aspects of a Cox model; I was mostly interested in one that captures the overall goodness of fit of the model with respect to the observations, and the Cox-Snell residual is one such residual. I got fascinated by the underlying idea: it is based on the probability integral transform, which says that if X is a continuous random variable with cumulative distribution function F_X, then the random variable Y = F_X(X) has a uniform distribution on [0, 1].
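The probability integral transform is easy to see numerically. A small NumPy sketch (the rate value is arbitrary, chosen just for the demo): draw from an exponential distribution, push the draws through their own CDF, and check that the result behaves like Uniform(0, 1).

```python
import numpy as np

rng = np.random.default_rng(1)
rate = 2.0                              # arbitrary rate, just for illustration
x = rng.exponential(1 / rate, 100_000)  # X ~ Exponential(rate)
u = 1 - np.exp(-rate * x)               # Y = F_X(X), the probability integral transform

# If the property holds, u ~ Uniform(0, 1): mean 1/2, variance 1/12,
# and each of ten equal-width bins should hold about 10% of the samples.
print(u.mean(), u.var())
counts, _ = np.histogram(u, bins=10, range=(0.0, 1.0))
print(counts / len(u))
```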

Now, for any continuous random variable X, since S(x) = Pr(X > x) = 1 - F_X(x), the fact that F_X(X) has a uniform distribution on [0, 1] implies that S(X) = 1 - F_X(X) does too. It can then be shown that the cumulative hazard evaluated at the actual exit time, H(X), which is known to equal -ln[S(X)], follows an exponential distribution with rate 1.
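This step, too, can be checked numerically. Continuing the NumPy sketch with made-up Weibull parameters (again purely illustrative, not Casebook estimates): the Weibull cumulative hazard is H(t) = (t/lam)^k, and evaluating it at simulated event times should produce Exp(1) draws.

```python
import numpy as np

rng = np.random.default_rng(2)
k, lam = 1.5, 300.0                 # hypothetical Weibull parameters, for illustration
x = lam * rng.weibull(k, 100_000)   # simulated exit times

# Weibull cumulative hazard: H(t) = (t/lam)**k = -ln S(t),
# so H(X) should follow an exponential distribution with rate 1.
h = (x / lam) ** k

# Exp(1) has mean 1, variance 1, and Pr(H > 1) = exp(-1).
print(h.mean(), h.var(), (h > 1.0).mean())
```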

Now, the Cox-Snell residual is defined as the cumulative hazard of the actual survival time X, evaluated under the fitted model; hence, by the argument in the paragraph above, if the coefficient estimates are close to the true coefficients, the Cox-Snell residuals should follow an exponential distribution with rate 1.

Moreover, the Cox-Snell residual is itself a continuous random variable, so by the probability integral transform property and the argument above, the cumulative hazard of the Cox-Snell residual should also follow an exponential distribution with rate 1.

Combining all we just said: if we plot the cumulative hazard of the Cox-Snell residuals against the residuals themselves, the resulting plot (a P-P-style diagnostic) should give a scatter of points lying approximately along the line y = x. In practice, the cumulative hazard of the Cox-Snell residuals is estimated with the Nelson-Aalen estimator, and I found this piece of code pretty useful in doing all this. For a theoretical discussion, one should refer to Section 11.2 of Klein and Moeschberger.
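The whole diagnostic fits in a few lines of NumPy, shown here as a sketch rather than the R code actually used. The `nelson_aalen` helper below is my own illustrative function, and the "residuals" are simulated Exp(1) draws standing in for the Cox-Snell residuals of a well-specified model; with real residuals one would pass the censoring indicators instead of all ones.

```python
import numpy as np

def nelson_aalen(times, events):
    """Nelson-Aalen cumulative hazard estimate, returned at the event times only."""
    order = np.argsort(times)
    t, e = times[order], events[order]
    at_risk = len(t) - np.arange(len(t))   # subjects still at risk at each time
    jumps = e / at_risk                    # d_i / n_i, zero at censored rows
    H = np.cumsum(jumps)
    return t[e == 1], H[e == 1]

rng = np.random.default_rng(3)
# Stand-in for the Cox-Snell residuals of a well-specified model: Exp(1) draws
r = rng.exponential(1.0, 5000)
tt, H = nelson_aalen(r, np.ones(len(r)))   # no censoring in this sketch

# Under a good fit, the points (r_i, H(r_i)) hug the line y = x; we check
# the largest gap on the well-populated part of the range.
mask = tt < 2.0
print(np.max(np.abs(H[mask] - tt[mask])))
```

Plotting `tt` against `H` (with the reference line y = x overlaid) gives the same picture the R diagnostic produces.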