Seminars

2019-04-04 - Daniel Apley

Presenter:

Daniel Apley

Title:

Affiliation:

Northwestern University

Date:

April 4, 2019

Abstract:

Website:

Dr. Apley's Website

2019-03-28 - Jeff Miller

Presenter:

Dr. Jeff Miller

Title:

Affiliation:

Harvard

Date:

March 28, 2019

Abstract:

Website:

Dr. Miller's Website

2019-03-21 - Yue Zhang

Presenter:

Dr. Yue Zhang

Title:

Affiliation:

University of Utah

Date:

March 21, 2019

Abstract:

Website:

Dr. Zhang's Webpage

2019-03-14 - Dennis Tolley

Presenter:

Dr. Dennis Tolley

Title:

Affiliation:

BYU

Date:

March 14, 2019

Abstract:

Website:

Dr. Tolley's Website

2019-03-07 - Grant Schultz

Presenter:

Dr. Grant Schultz

Title:

Affiliation:

BYU

Date:

March 7, 2019

Abstract:

Website:

Dr. Schultz's Webpage

2019-02-28 - Ephraim Hanks

Presenter:

Dr. Ephraim Hanks

Title:

Affiliation:

Penn State

Date:

February 28, 2019

Abstract:

Website:

Dr. Hanks' Website

2019-02-21 - Michele Guindani

Presenter:

Michele Guindani

Title:

Affiliation:

University of California, Irvine

Date:

February 21, 2019

Abstract:

Website:

Dr. Guindani's Website

2019-02-14 - Garritt Page

Presenter:

Dr. Garritt Page

Title:

Affiliation:

BYU

Date:

February 14, 2019

Abstract:

Website:

Dr. Page's Website

2019-02-07 - Gil Fellingham

Presenter:

Dr. Gil Fellingham

Title:

Affiliation:

BYU

Date:

February 7, 2019

Abstract:

Website:

Dr. Fellingham's Webpage

2019-01-31 - Matthias Katzfuss - Gaussian-Process Approximations for Big Data

Presenter:

Matthias Katzfuss

Title:

Gaussian-Process Approximations for Big Data

Affiliation:

Texas A&M University

Date:

January 31, 2019

Abstract:

Gaussian processes (GPs) are popular, flexible, and interpretable probabilistic models for functions. GPs are well suited for big data in areas such as machine learning, regression, and geospatial analysis. However, direct application of GPs is computationally infeasible for large datasets. We consider a framework for fast GP inference based on the so-called Vecchia approximation. Our framework contains many popular existing GP approximations as special cases. Representing the models by directed acyclic graphs, we determine the sparsity of the matrices necessary for inference, which leads to new insights regarding the computational properties. Based on these results, we propose novel Vecchia approaches for noisy, non-Gaussian, and massive data. We provide theoretical results, conduct numerical comparisons, and apply the methods to satellite data.
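
To illustrate the idea behind the talk (a minimal sketch, not Dr. Katzfuss's implementation), a Vecchia approximation replaces the joint Gaussian likelihood with a product of low-dimensional conditionals, each conditioning only on a small set of previously ordered neighbors. The covariance function and all parameter values below are invented for illustration:

```python
import numpy as np
from scipy.stats import norm

def exp_cov(x1, x2, range_=0.2, sill=1.0):
    # exponential covariance on 1-D locations (illustrative parameters)
    return sill * np.exp(-np.abs(x1[:, None] - x2[None, :]) / range_)

def vecchia_loglik(y, x, m=5):
    """Vecchia approximation: log p(y) ~= sum_i log p(y_i | y_c(i)),
    where c(i) holds the m nearest locations among those ordered
    before i. With m = n it recovers the exact GP log-likelihood."""
    order = np.argsort(x)                  # simple coordinate ordering
    x, y = x[order], y[order]
    ll = 0.0
    for i in range(len(y)):
        if i == 0:
            mu, var = 0.0, exp_cov(x[:1], x[:1])[0, 0]
        else:
            c = np.argsort(np.abs(x[:i] - x[i]))[:m]   # conditioning set
            Kcc = exp_cov(x[c], x[c])
            kic = exp_cov(x[i:i + 1], x[c])[0]
            w = np.linalg.solve(Kcc, kic)
            mu = w @ y[c]                  # conditional mean
            var = exp_cov(x[i:i + 1], x[i:i + 1])[0, 0] - w @ kic
        ll += norm.logpdf(y[i], loc=mu, scale=np.sqrt(var))
    return ll
```

The computational point is that each conditional involves only an m-by-m solve instead of one n-by-n factorization, and the conditioning sets correspond to a sparse directed acyclic graph over the observations.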

Website:

Dr. Katzfuss's Website

2019-01-24 - Brennan Bean - Interval-Valued Kriging with Applications in Design Ground Snow Load Prediction

Presenter:

Brennan Bean

Title:

Interval-Valued Kriging with Applications in Design Ground Snow Load Prediction

Affiliation:

Utah State University

Date:

January 24, 2019

Abstract:

The load induced by snow on the roof of a structure is a serious design consideration in many western and northeastern states: underestimating loads can lead to structural failure, while overestimating loads unnecessarily increases construction costs. Recent updates to the design ground snow load requirements in Utah use geostatistical models to produce design ground snow load estimates with significantly improved accuracy. However, the model inputs are subject to several sources of uncertainty, including measurement limitations, short observation periods, and shortcomings in the distribution fitting process, among others. Ignoring these uncertainties in the modeling process could result in critical information loss that robs the final predictions of proper context. One way to account for these uncertainties is to express the data as intervals rather than single numbers. Interval-valued geostatistical models for uncertainty characterization were originally considered and studied in the late 1980s, but those models suffer from several fundamental problems that limit their application. This presentation proposes to modify and improve the interval-valued kriging models of Diamond (1989) based on recent developments in random set theory. The resulting models are shown to have a more structured formulation and to be computationally feasible. A numerical implementation of these models is developed based on a modified Newton-Raphson algorithm, and its finite-sample performance is demonstrated through a simulation study. The models are then applied to the Utah snow load dataset to produce an interval-valued version of the 2018 Utah Snow Load Study. The interesting and promising implications of these new results for design ground snow loads and structural risk analysis will be thoroughly discussed.

Website:

Brennan's Webpage

2019-01-17 - Ron Reeder - Improving outcomes after pediatric cardiac arrest – a hybrid stepped-wedge trial

Presenter:

Ron Reeder

Title:

Improving outcomes after pediatric cardiac arrest – a hybrid stepped-wedge trial

Affiliation:

University of Utah

Date:

January 17, 2019

Abstract:

Quality of cardiopulmonary resuscitation (CPR) is associated with survival, but recommended guidelines are often not met, and fewer than half of the children who suffer an in-hospital arrest survive to discharge. A single-center before-and-after study demonstrated that outcomes may be improved with a novel training program in which all pediatric intensive care unit staff are encouraged to participate in frequent CPR refresher training and regular, structured resuscitation debriefings focused on patient-centric physiology.

I’ll present the design of an ongoing trial that will assess whether a program of structured debriefings and point-of-care bedside practice that emphasizes physiologic resuscitation targets improves the rate of survival to hospital discharge with favorable neurologic outcome in children receiving CPR in the intensive care unit. This study is designed as a hybrid stepped-wedge trial in which two of ten participating hospitals are randomly assigned to enroll in the intervention group and two are assigned to enroll in the control group for the duration of the trial. The remaining six hospitals enroll initially in the control group but will transition to enrolling in the intervention group at randomly assigned staggered times during the enrollment period.

This trial is the first implementation of a hybrid stepped-wedge design. It was chosen over a traditional stepped-wedge design because the resulting improvement in statistical power reduces the required enrollment period by 9 months (14%). However, the design comes with additional challenges, including the logistics of implementing an intervention before enrollment begins. Nevertheless, if results from the single-center pilot are confirmed in this trial, it will have a profound effect on CPR training and quality improvement initiatives.
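
For intuition, the assignment matrix of such a hybrid design can be sketched as follows (the number of periods and the randomization details below are invented for illustration and do not reflect the trial's actual protocol):

```python
import numpy as np

def hybrid_stepped_wedge(n_sites=10, n_periods=8, n_always_tx=2,
                         n_always_ctrl=2, seed=1):
    """Build a hybrid stepped-wedge assignment matrix
    (rows = sites, columns = periods; 1 = intervention).
    Two sites are intervention throughout, two are control
    throughout, and the rest cross over at staggered times."""
    rng = np.random.default_rng(seed)
    sites = rng.permutation(n_sites)
    design = np.zeros((n_sites, n_periods), dtype=int)
    design[sites[:n_always_tx], :] = 1  # intervention for the whole trial
    # the next n_always_ctrl sites stay 0: control for the whole trial
    steppers = sites[n_always_tx + n_always_ctrl:]
    # remaining sites cross over at randomly assigned, staggered periods
    cross = rng.choice(np.arange(1, n_periods), size=len(steppers),
                       replace=False)
    for s, t in zip(steppers, cross):
        design[s, t:] = 1
    return design
```

The always-intervention and always-control rows are what distinguish the hybrid from a traditional stepped wedge, in which every site eventually crosses over.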

Website:

Dr. Reeder's Website

2019-01-10 - Juan Rodriguez - Deep Learning to Save Humanity

Presenter:

Juan Rodriguez

Title:

Deep Learning to Save Humanity

Affiliation:

Recursion Pharmaceuticals

Date:

January 10, 2019

Abstract:

During the last 50 years, the advances in computational processing and storage have overshadowed the progress of most areas of research. At Recursion Pharmaceuticals we are translating these advances into biological results to change the way drug discovery is done. We are hyper-parallelizing the scientific method to discover new treatments for patients. This new approach presents unique statistical and mathematical challenges in the area of artificial intelligence and computer vision which will be presented.

Website:

Company Website

2018-12-06 - Dennis Eggett - Making the best of messy data: A return to basics

Presenter:

Dr. Dennis Eggett

Title:

Making the best of messy data: A return to basics

Affiliation:

BYU

Date:

December 6, 2018

Abstract:

When your data do not meet the basic assumptions of an analysis method, you have to go back to the basics in order to glean the information you need. Three data sets will be used to explore resampling methods based on the definition of a p-value and the central limit theorem. A simple two-sample t-test on a data set that is far from normal and does not conform to non-parametric methods is used to demonstrate resampling in its simplest form. A mixed model analysis of highly skewed data will be used to demonstrate how to maintain the data's structure through the resampling process. Finally, resampling of a very large data set will be used to demonstrate how to obtain parameter estimates and confidence intervals.
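
Resampling "in its simplest form" can be sketched as a permutation test that applies the definition of a p-value directly: the proportion of group relabelings whose statistic is at least as extreme as the observed one. This is a hypothetical illustration, not the speaker's code; the mean-difference statistic and add-one correction are choices made for the sketch:

```python
import numpy as np

def permutation_pvalue(a, b, n_perm=10_000, seed=0):
    """Two-sample permutation test: shuffle the pooled data, split it
    into the original group sizes, and count how often the shuffled
    mean difference is at least as extreme as the observed one."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([a, b])
    observed = abs(np.mean(a) - np.mean(b))
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(pooled[:len(a)].mean() - pooled[len(a):].mean())
        count += diff >= observed
    return (count + 1) / (n_perm + 1)  # add-one keeps p strictly positive
```

No normality assumption enters anywhere: the null distribution of the statistic is built from the data themselves.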

Website:

Dr. Eggett's Webpage

2018-11-29 - Bruno Sanso - Multi-Scale Models for Large Non-Stationary Spatial Datasets

Presenter:

Bruno Sanso

Title:

Multi-Scale Models for Large Non-Stationary Spatial Datasets

Affiliation:

University of California Santa Cruz

Date:

November 29, 2018

Abstract:

Large spatial datasets often exhibit features that vary at different scales as well as at different locations. To model random fields whose variability changes across scales, we use multiscale kernel convolution models. These models rely on nested grids of knots at different resolutions: lower order terms capture large scale features, while higher order terms capture small scale ones. In this talk we consider two approaches to fitting multi-resolution models with space-varying characteristics. In the first approach, to accommodate the space-varying nature of the variability, we consider priors for the coefficients of the kernel expansion that are structured to provide increasing shrinkage as the resolution grows. Moreover, a tree shrinkage prior auto-tunes the degree of resolution necessary to model a subregion of the domain. In addition, compactly supported kernel functions allow local updating of the model parameters, which achieves massive scalability through suitable parallelization. As an alternative, we develop an approach that relies on knot selection, rather than shrinkage, to achieve parsimony, and discuss how this induces a field with spatially varying resolution. We extend shotgun stochastic search to the multi-resolution model setting, and demonstrate that this method is computationally competitive and produces excellent fits to both synthetic and real datasets.
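
As a toy, one-dimensional illustration of the kernel-convolution construction (the grid sizes, Gaussian kernel, and geometric shrinkage rate are invented for the sketch and are not the models in the talk):

```python
import numpy as np

def multiscale_field(x, n_res=3, base_knots=4, shrink=0.5, seed=3):
    """Simulate a 1-D multiscale kernel-convolution field: nested knot
    grids double in size with resolution, kernels narrow accordingly,
    and coefficient variances shrink so that higher resolutions add
    progressively finer, smaller-amplitude detail."""
    rng = np.random.default_rng(seed)
    f = np.zeros_like(x, dtype=float)
    for r in range(n_res):
        n_knots = base_knots * 2 ** r                 # nested, doubling grids
        knots = np.linspace(0, 1, n_knots)
        width = 1.0 / n_knots                         # kernel narrows with resolution
        beta = rng.normal(0.0, shrink ** r, n_knots)  # increasing shrinkage
        K = np.exp(-0.5 * ((x[:, None] - knots[None, :]) / width) ** 2)
        f += K @ beta
    return f
```

Truncating the sum at a lower resolution, or zeroing coefficients in a subregion, is the lever that the shrinkage and knot-selection priors described in the talk act on.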

Website:

Dr. Sanso's Website

2018-11-15 - Margie Rosenberg - Unsupervised Clustering Techniques using all Categorical Variables

Presenter:

Margie Rosenberg

Title:

Unsupervised Clustering Techniques using all Categorical Variables

Affiliation:

University of Wisconsin-Madison

Date:

November 15, 2018

Abstract:

We present a case study to illustrate a novel way of clustering individuals into groups of similar individuals when the covariates are all categorical. Our method is especially useful for multi-level categorical data with no inherent ordering in the variable, such as race. We use data from the National Health Interview Survey (NHIS) to form the clusters and apply these clusters for prediction purposes to the Medical Expenditure Panel Survey (MEPS). Our approach incorporates the person-weighting of the surveys to produce clusters and per-cluster expenditure estimates that are representative of the US adult civilian non-institutionalized population. For the clustering method, we apply the K-Medoids approach with an adapted version of the Goodall dissimilarity index. We validate our approach on independent NHIS/MEPS data from a different panel. Our results indicate that the clusters are robust across years and useful for distinguishing groups when predicting expenditures.
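
A stripped-down sketch of the clustering step (substituting simple matching for the adapted Goodall index and ignoring the survey person-weights, both of which are central to the actual method):

```python
import numpy as np

def matching_dissim(X):
    """Pairwise simple-matching dissimilarity for categorical data:
    the fraction of variables on which two rows disagree."""
    n = len(X)
    D = np.zeros((n, n))
    for i in range(n):
        D[i] = (X != X[i]).mean(axis=1)
    return D

def k_medoids(D, k=2, max_iter=50, seed=0):
    """Basic K-Medoids on a precomputed dissimilarity matrix: assign
    each row to its nearest medoid, then move each medoid to the
    cluster member with the lowest total within-cluster dissimilarity."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(D), size=k, replace=False)
    for _ in range(max_iter):
        labels = np.argmin(D[:, medoids], axis=1)
        new = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if len(members):
                costs = D[np.ix_(members, members)].sum(axis=1)
                new[j] = members[np.argmin(costs)]
        if np.array_equal(new, medoids):
            break
        medoids = new
    labels = np.argmin(D[:, medoids], axis=1)
    return labels, medoids
```

Because medoids are actual rows rather than averages, nothing here requires an ordering or a mean of the categories, which is what makes the approach natural for nominal variables like race.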

Website:

Dr. Rosenberg's Website
