Cover

Table of Contents

“Exercises of Statistical Inference”

INTRODUCTION

THEORETICAL OUTLINE

EXERCISES

“Exercises of Statistical Inference”

SIMONE MALACRIDA

In this book, exercises are carried out regarding the following mathematical topics:

estimation theory

hypothesis testing and verification

linear regression

Initial theoretical notes are also presented to help the reader understand how the exercises are carried out.

Simone Malacrida (1977)

An engineer and writer, he has worked in research, finance, energy policy and industrial plants.

ANALYTICAL INDEX

––––––––

INTRODUCTION

––––––––

I – THEORETICAL OUTLINE

Introduction

Estimation theory

Hypothesis testing

Regression

Bayesian inference

––––––––

II – EXERCISES

Exercise 1

Exercise 2

Exercise 3

Exercise 4

Exercise 5

Exercise 6

Exercise 7

Exercise 8

Exercise 9

Exercise 10

Exercise 11

Exercise 12

Exercise 13

Exercise 14

Exercise 15

Exercise 16

Exercise 17

Exercise 18

Exercise 19

Exercise 20

Exercise 21

Exercise 22

Exercise 23

Exercise 24

Exercise 25

Exercise 26

Exercise 27

INTRODUCTION

In this exercise book, some examples of calculations related to statistical inference are carried out.

Furthermore, the main theorems used both in estimation theory and in hypothesis testing are presented.

The study of statistics, in fact, does not stop at the properties of continuous and discrete probability distributions, but extends to inference, applying the statistical concepts of estimation, mean, variance, regression and hypothesis testing to specific tests.

To follow the solutions of the exercises in more detail, the theoretical frame of reference is recalled in the first chapter.

What is presented in this workbook is generally addressed in advanced statistics courses at university level.

I

THEORETICAL OUTLINE

Introduction

––––––––

Statistical inference falls into two broad areas of interest: estimation theory and hypothesis testing.

At the basis of both areas is sampling, understood as the selection of a sample from the statistical population: it can be random, probabilistic, purposive or convenience sampling.

Sampling methods depend on the underlying probability distribution and on the random variables involved.

––––––––

Estimation theory

––––––––

Estimation theory makes it possible to estimate parameters from measured data through a deterministic function called an estimator.

Various properties characterize the quality of an estimator, including unbiasedness (correctness), consistency, efficiency, sufficiency and completeness.

A correct (unbiased) estimator is one whose expected value equals the quantity to be estimated; otherwise the estimator is said to be biased.

The difference between the expected value of the estimator and the true value of the parameter is called bias; if this difference tends to zero as the sample size tends to infinity, the estimator is said to be asymptotically correct.
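
In symbols, writing T(X) for the estimator and θ for the unknown parameter (notation adopted here for the formulas below), the bias is:

$$ b(T) = \mathbb{E}[T(X)] - \theta $$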

Given a random variable X with unknown parameter Y, a statistic T(X) is sufficient for Y if the conditional probability distribution of X given T(X) does not depend on Y.

An estimator for the parameter Y is said to be weakly consistent if, as the sample size approaches infinity, it converges in probability to the value of Y.
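
In the usual notation, with T_n the estimator computed on a sample of size n and θ the true parameter value, weak consistency means convergence in probability:

$$ \lim_{n \to \infty} P\left( |T_n - \theta| > \varepsilon \right) = 0 \qquad \text{for every } \varepsilon > 0 $$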

If, on the other hand, it converges almost surely, then it is said to be consistent in the strong sense.

A sufficient condition for weak consistency is that the estimator is asymptotically correct and that we have at the same time:
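
The condition referred to here is, in standard treatments, the requirement that the variance of the estimator vanish asymptotically:

$$ \lim_{n \to \infty} \operatorname{Var}(T_n) = 0 $$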

––––––––

We define the Fisher information as the variance of the logarithmic derivative (the score) of a given likelihood function (the concept of likelihood will be defined shortly).
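
In standard notation, with f(x; θ) the likelihood of the parameter θ, the Fisher information is:

$$ I(\theta) = \operatorname{Var}\!\left( \frac{\partial}{\partial \theta} \log f(X;\theta) \right) = \mathbb{E}\!\left[ \left( \frac{\partial}{\partial \theta} \log f(X;\theta) \right)^{2} \right] $$

The two expressions coincide under the usual regularity conditions, since the score has zero mean.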

This quantity is additive for independent random variables.

The Fisher information of a sufficient statistic is the same as that contained in the whole sample.

In the case of multivariate distributions we have:
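
For a vector parameter θ = (θ_1, …, θ_k), the standard form is the Fisher information matrix, with entries:

$$ [I(\theta)]_{ij} = \mathbb{E}\!\left[ \frac{\partial \log f(X;\theta)}{\partial \theta_i} \, \frac{\partial \log f(X;\theta)}{\partial \theta_j} \right] $$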

The Cramér-Rao inequality states that the variance of an unbiased estimator is related to the Fisher information as follows:
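
For an unbiased estimator T of θ, the standard scalar form of the bound is:

$$ \operatorname{Var}(T) \ge \frac{1}{I(\theta)} $$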

In the multivariate case it becomes:
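
In the standard multivariate form, with Cov(T) the covariance matrix of the estimator:

$$ \operatorname{Cov}(T) \ge I(\theta)^{-1} $$

where the inequality means that Cov(T) - I(θ)^{-1} is a positive semi-definite matrix.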

The efficiency of an unbiased estimator is defined as follows:
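
In the standard definition, the efficiency compares the variance of the estimator with the Cramér-Rao bound:

$$ e(T) = \frac{1 / I(\theta)}{\operatorname{Var}(T)} $$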

It follows from the Cramér-Rao inequality that the efficiency of an unbiased estimator is less than or equal to 1.

An estimator is said to be efficient if its variance reaches the lower bound of the Cramér-Rao inequality, and asymptotically efficient if this bound is reached only in the limit as the sample size tends to infinity.

The relative efficiency between two estimators is given by:
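
A common convention, for two unbiased estimators T_1 and T_2 of the same parameter, is the ratio of the variances (the opposite ordering is also found in the literature):

$$ e(T_1, T_2) = \frac{\operatorname{Var}(T_2)}{\operatorname{Var}(T_1)} $$

so that a value greater than 1 indicates that T_1 is the more efficient of the two.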

Imprint

Publisher: BookRix GmbH & Co. KG

Date of publication: 23.04.2023
ISBN: 978-3-7554-4007-9

All rights reserved
