The history of statistics is alive to me as I fondly recall my interactions with Erich Lehmann since receiving my Ph.D. from Berkeley in 1953. The 2003 Nobel Prize in Economics (awarded for fundamental research in statistical time series analysis) reminds me of my joking complaint to diverse applied researchers: "why do you call it theory when I know it, and applied research when you practice it?" I have continued to learn a great deal about quantiles and nonparametric data modeling since my 1979 JASA paper. New methods have been developed (which some applied researchers consider a gold mine). Quantile data modeling is not practiced by most statisticians, whose use of quantiles is limited to the sample median Q2, the interquartile range IQR, and Q-Q probability plots.

To estimate and test a parameter mu, one starts with a natural estimator mu^. We define a statistic T(mu, mu^) that is an increasing function of mu and whose distribution (when mu is the true parameter) equals the distribution of a random variable T (usually Normal(0,1), Student t, or inverse average chi-square). To test H_0: mu = mu_0 one computes or bounds P-value(mu_0) = F_T(observed T(mu_0, mu^)); as a function of mu_0 it is a distribution function (whose probability density one could derive). Define its inverse mu^(u) by Q_T(u) = T(mu^(u), mu^); mu^(u), called the parameter with P-value u, is a quantile function which has a pseudo-Bayesian interpretation as the conditional quantile of mu given the data. The conventional level 1-a confidence interval can be shown to be (mu^(a/2), mu^(1-a/2)).
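A minimal sketch of this testing-and-inversion recipe, for the normal-mean case with known sigma (the sample values and sigma below are invented for illustration; Normal(0,1) plays the role of T, so F_T and Q_T are the standard normal distribution and quantile functions):

```python
# Sketch of the "parameter with P-value u" recipe for the normal-mean
# case with known sigma. All data values are made up for illustration.
import math

def phi(z):
    """Standard normal distribution function F_T."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi_inv(u, lo=-10.0, hi=10.0):
    """Standard normal quantile function Q_T, by bisection."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if phi(mid) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative sample: n observations, known sigma.
data = [4.1, 5.2, 3.8, 4.9, 5.5, 4.4, 5.0, 4.7]
sigma = 1.0
n = len(data)
mu_hat = sum(data) / n                       # natural estimator mu^

def T(mu):
    """T(mu, mu^) = sqrt(n)(mu - mu^)/sigma: increasing in mu,
    distributed Normal(0,1) when mu is the true parameter."""
    return math.sqrt(n) * (mu - mu_hat) / sigma

def p_value(mu0):
    """P-value(mu_0) = F_T(T(mu_0, mu^)); a distribution function of mu_0."""
    return phi(T(mu0))

def mu_u(u):
    """Parameter with P-value u: solves Q_T(u) = T(mu^(u), mu^)."""
    return mu_hat + sigma / math.sqrt(n) * phi_inv(u)

a = 0.05
ci = (mu_u(a / 2), mu_u(1 - a / 2))          # level 1-a confidence interval
```

Inverting the P-value here recovers the familiar interval mu^ ± z_{1-a/2} sigma/sqrt(n), which illustrates how the quantile function mu^(u) repackages standard inference.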
The talk will define (and plot on the same graph for the exponential and normal distributions) the informative quantile/quartile function Q/Q(u) = (Q(u) - midquartile)/(2 IQR). The talk could also discuss confidence Q-Q plots, conditional quantiles, comparison distributions, the mid-distribution, and the definitions of sample quantiles, linear rank statistics, and the sample variance.
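The Q/Q construction above can be sketched as follows, evaluating (rather than plotting) the informative quantile/quartile function for the exponential and normal cases; the grid of u-values is an assumption standing in for the graph described in the text:

```python
# Sketch of the informative quantile/quartile function
# Q/Q(u) = (Q(u) - midquartile) / (2 IQR), for the unit exponential
# and standard normal distributions.
import math

def normal_Q(u):
    """Standard normal quantile function, by bisection on Phi."""
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if phi(mid) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def exponential_Q(u):
    """Quantile function of the unit exponential distribution."""
    return -math.log(1.0 - u)

def informative_QQ(Q):
    """Build Q/Q(u) from a quantile function Q: center at the
    midquartile (Q1 + Q3)/2, scale by twice the interquartile range."""
    q1, q3 = Q(0.25), Q(0.75)
    midquartile = 0.5 * (q1 + q3)
    iqr = q3 - q1
    return lambda u: (Q(u) - midquartile) / (2.0 * iqr)

normal_QQ = informative_QQ(normal_Q)
exp_QQ = informative_QQ(exponential_Q)

# By construction Q/Q(0.25) = -0.25 and Q/Q(0.75) = +0.25 for every
# distribution, so the two curves are directly comparable on one graph;
# skewness and tail weight show up in how Q/Q(u) departs from that band.
grid = [0.01, 0.1, 0.25, 0.5, 0.75, 0.9, 0.99]
table = {u: (normal_QQ(u), exp_QQ(u)) for u in grid}
```

The centering and scaling make every distribution pass through the same quartile points, which is what makes overplotting the exponential and normal curves informative.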