Using this function, a sample is obtained from big data under logistic regression when more than one model can describe the data. Subsampling probabilities are obtained based on the A- and L-optimality criteria.

Usage

modelRobustLogSub(r1, r2, Y, X, N, Alpha, All_Combinations, All_Covariates)

Arguments

r1

sample size for initial random sampling

r2

sample size for optimal sampling

Y

response data (Y)

X

covariate data (X), a matrix containing all the covariates; the first column is for the intercept

N

size of the big data

Alpha

vector of alpha values used to obtain the model-robust subsampling probabilities

All_Combinations

list of possible models that can describe the data

All_Covariates

all the covariates in the models (a minimal call sketch follows the argument list)
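
Below is a minimal call sketch using the argument shapes described above. The objects Y_matrix and X_matrix are assumed placeholders for the response and covariate data; a complete workflow is given in the Examples section.

All_Models <- list(Model_1 = c("X0", "X1", "X2"),
                   Model_2 = c("X0", "X1", "X2", "X2^2"))
Results <- modelRobustLogSub(r1 = 300, r2 = 600,
                             Y = Y_matrix,          # response data
                             X = X_matrix,          # covariates, first column intercept
                             N = nrow(X_matrix),    # big data size
                             Alpha = rep(1/length(All_Models), length(All_Models)),
                             All_Combinations = All_Models,
                             All_Covariates = colnames(X_matrix))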

Value

The output of modelRobustLogSub gives a list of the following components; a brief access sketch follows the list.

Beta_Data

estimated model parameters for each model in a list after subsampling

Utility_Data

estimated variance and information of the model parameters after subsampling

Sample_L-optimality

list of indexes for the initial and optimal samples obtained based on the L-optimality criteria

Sample_L-optimality_MR

list of indexes for the initial and model-robust optimal samples obtained based on the L-optimality criteria

Sample_A-optimality

list of indexes for the initial and optimal samples obtained based on the A-optimality criteria

Sample_A-optimality_MR

list of indexes for the initial and model-robust optimal samples obtained based on the A-optimality criteria

Subsampling_Probability

matrix of calculated subsampling probabilities for the A- and L-optimality criteria
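
A brief access sketch, assuming a Results object returned by modelRobustLogSub() (as in the Examples below) and the component names listed above; component names containing a hyphen need backticks.

Results$Beta_Data                      # per-model parameter estimates
Results$Utility_Data                   # estimated variance and information
Results$`Sample_L-optimality`          # L-optimality sample indexes
Results$`Sample_A-optimality_MR`       # model-robust A-optimality sample indexes
head(Results$Subsampling_Probability)  # A- and L-optimality subsampling probabilities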

Details

Two-stage subsampling algorithm for big data under logistic regression when multiple models can describe the big data.

The first stage obtains a random sample of size \(r_1\) and estimates the model parameters for all models. Using the estimated parameters, subsampling probabilities are evaluated for the A- and L-optimality criteria and for the model-averaging A- and L-optimality subsampling methods.

Through the estimated subsampling probabilities, a sample of size \(r_2 \ge r_1\) is obtained. Finally, the two samples are combined and the model parameters are estimated for all the models.
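
The following is a minimal conceptual sketch of this two-stage, model-robust idea in plain R. It is an illustration only, not the package's internal code; the L-optimality-style score |y - p| * ||x||, the candidate models, and the sample sizes are assumptions made for the sketch.

set.seed(1)
N <- 10000; r1 <- 300; r2 <- 600
x1 <- rnorm(N); x2 <- rnorm(N)
X_real <- cbind(1, x1, x2, x1^2)
Y <- rbinom(N, 1, as.vector(plogis(X_real %*% c(-1, 2, 1, 2))))

models <- list(Model_1 = cbind(1, x1, x2, x1^2),   # candidate model matrices
               Model_2 = cbind(1, x1, x2))
Alpha  <- c(0.5, 0.5)                              # a priori model weights

# Stage 1: pilot sample of size r1 and pilot estimates for every model
idx1 <- sample(N, r1)
pi_list <- lapply(models, function(Xm) {
  beta_hat <- coef(glm(Y[idx1] ~ Xm[idx1, ] - 1, family = binomial))
  p_hat    <- as.vector(plogis(Xm %*% beta_hat))
  score    <- abs(Y - p_hat) * sqrt(rowSums(Xm^2))  # L-optimality-style score
  score / sum(score)                                # per-model probabilities
})

# Model-robust probabilities: Alpha-weighted average across the models
pi_MR <- Reduce(`+`, Map(`*`, Alpha, pi_list))

# Stage 2: sample of size r2 using pi_MR, combine with the pilot sample,
# and re-estimate every model on the combined, weighted subsample
idx2 <- sample(N, r2, replace = TRUE, prob = pi_MR)
idx  <- c(idx1, idx2)
fits <- lapply(models, function(Xm)
  coef(glm(Y[idx] ~ Xm[idx, ] - 1, weights = 1 / pi_MR[idx],
           family = quasibinomial)))  # quasibinomial avoids the non-integer-weight warning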

NOTE: If the input parameters do not satisfy the conditions below, an error message will be produced. A pre-check sketch of these conditions is given after the list.

If \(r_2 \ge r_1\) is not satisfied then an error message will be produced.

If the big data \(X,Y\) has any missing values then an error message will be produced.

The big data size \(N\) is compared with the sizes of \(X\) and \(Y\); if they do not match, an error message will be produced.

If \(0 < \alpha < 1\) is not satisfied for the a priori probabilities, an error message will be produced.
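
For illustration, the same conditions can be checked before the call; this is a hedged sketch using the argument names from the Arguments section, not the package's internal validation.

stopifnot(all(r2 >= r1))                   # r2 must be at least r1
stopifnot(!anyNA(X), !anyNA(Y))            # no missing values in the big data
stopifnot(N == nrow(X), N == NROW(Y))      # N must match the sizes of X and Y
stopifnot(all(Alpha > 0), all(Alpha < 1))  # a priori probabilities lie in (0, 1)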

References

Mahendran A, Thompson H, McGree JM (2023). “A model robust subsampling approach for Generalised Linear Models in big data settings.” Statistical Papers, 64(4), 1137--1157.

Examples

Dist<-"Normal"; Dist_Par<-list(Mean=0,Variance=1)
No_Of_Var<-2; Beta<-c(-1,2,1,2); N<-10000
All_Models<-list(Real_Model=c("X0","X1","X2","X1^2"),
                 Assumed_Model_1=c("X0","X1","X2"),
                 Assumed_Model_2=c("X0","X1","X2","X2^2"),
                 Assumed_Model_3=c("X0","X1","X2","X1^2","X2^2"))
family = "logistic"

Full_Data<-GenModelRobustGLMdata(Dist,Dist_Par,No_Of_Var,Beta,N,All_Models,family)
#> Warning: glm.fit: fitted probabilities numerically 0 or 1 occurred
#> Warning: glm.fit: fitted probabilities numerically 0 or 1 occurred

r1<-300; r2<-rep(100*c(6,9,12),50); Original_Data<-Full_Data$Complete_Data;

modelRobustLogSub(r1 = r1, r2 = r2,
                  Y = as.matrix(Original_Data[,colnames(Original_Data) %in% c("Y")]),
                  X = as.matrix(Original_Data[,-1]),N = nrow(Original_Data),
                  Alpha = rep(1/length(All_Models),length(All_Models)),
                  All_Combinations = All_Models,
                  All_Covariates = colnames(Original_Data)[-1])->Results
#> Step 1 of the algorithm completed.
#> Step 2 of the algorithm completed.

Beta_Plots<-plot_Beta(Results)