There are many uncertainties in our lives: the monthly revenue of a business, the number of vehicles left in a parking lot each day, the number of phone calls we receive each day, the amount of money we spend on watching movies each year, the names of the customers who will enter the queue for the teller, the colours of the cars that will enter a toll gate in the next hour, whether or not the next flight is going to be late, and so on. The first four are random variables, but the last three are not. Among the things that are uncertain, some are random variables and some are not.
So, what characterizes a random variable? Let’s see some definitions.
- A random variable is a function that associates a real number with each element in the sample space. (Walpole, 1993)
- A random variable is a function associated with an experiment whose values are real numbers and whose occurrence in the trials depends on chance. (Kreyszig, 1993)
First, the values of a random variable must be real numbers. Names are not numbers, so the names of the customers who will enter the queue for the teller cannot form a random variable. Similarly, colours are not numbers, so the colours of the cars that will enter a toll gate in the next hour cannot form a random variable. Second, the values occur by chance: we do not know which value will occur next. The monthly revenue of a business is a random variable. Its values are real numbers such as 30 thousand, 45 thousand, or 100 thousand dollars, and they occur by chance; we do not know whether next month's revenue will be $30,000, $45,000, or some other value. Likewise, the number of phone calls we receive each day is a random variable. Its values can be 0 (no calls), 1, 2, 3, and so on. They are real numbers, they occur by chance, and we do not know how many calls we will get tomorrow.
Every random variable has a probability distribution associated with it. This is natural, because the values of a random variable occur by chance. Let's see an introductory example. Consider a simple bet in which two dice are rolled simultaneously. If both dice show the same number of spots, we win and receive 60 dollars. Otherwise, we lose and pay 12 dollars. Let X be the amount of money we earn from the bet. X is a random variable. Why? The values it can take are 60 dollars and -12 dollars. [The negative sign indicates a loss.] There is a probability of $\frac{1}{6}$ of winning 60 dollars and a probability of $\frac{5}{6}$ of losing 12 dollars. [See Example 2P in https://edcommstatistics.blogspot.com/2019/09/the-theoretical-probabilities.html] Here we see that each possible value of X has a certain probability associated with it. The ordered pairs $(60,\frac{1}{6})$ and $(-12,\frac{5}{6})$ are the members of a set called the probability distribution of X. The probability distribution can be presented in a table as follows.

| $x$ | 60 | -12 |
|---|---|---|
| $f(x)$ | $\frac{1}{6}$ | $\frac{5}{6}$ |
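For completeness, here is the standard counting argument behind these two probabilities (the values themselves are the ones cited from Example 2P): when two fair dice are rolled there are 36 equally likely outcomes, and 6 of them show matching numbers, so

$P(X = 60) = \frac{6}{36} = \frac{1}{6} \qquad \text{and} \qquad P(X = -12) = 1 - \frac{1}{6} = \frac{5}{6}.$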
Let’s see another example. In a lottery, two coins are tossed simultaneously, once. The amount of money we receive is based on the number of coins showing the Head (H) side. If no Head appears, we get no money. If one H appears, we get 10 dollars. If two H’s appear, we receive 20 dollars. Let Y be the amount of money we earn from the lottery. Assuming the coins are fair, we have the following probability distribution.

| $y$ | 0 | 10 | 20 |
|---|---|---|---|
| $f(y)$ | $\frac{1}{4}$ | $\frac{1}{2}$ | $\frac{1}{4}$ |
(Please refer to Example 3P in https://edcommstatistics.blogspot.com/2019/09/the-theoretical-probabilities.html for how these probability values are obtained.)
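As an extra illustration (not part of the original examples), a short simulation can check these probabilities empirically. The sketch below is in Python; the function name `simulate_lottery` and the number of trials are arbitrary choices made here for illustration.

```python
import random

def simulate_lottery(trials=100_000):
    """Estimate the probability distribution of Y, the payout of the
    two-coin lottery: 0 dollars for no Heads, 10 for one Head, 20 for two."""
    counts = {0: 0, 10: 0, 20: 0}
    for _ in range(trials):
        # Toss two fair coins; each coin contributes 1 if it lands Heads.
        heads = sum(random.choice([0, 1]) for _ in range(2))
        counts[10 * heads] += 1
    # Convert counts to relative frequencies.
    return {payout: n / trials for payout, n in counts.items()}

print(simulate_lottery())  # roughly {0: 0.25, 10: 0.5, 20: 0.25}
```

The estimated relative frequencies should come out close to $\frac{1}{4}$, $\frac{1}{2}$, and $\frac{1}{4}$, matching the table above.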
THE MEAN OF A RANDOM VARIABLE
Let X be a discrete random variable with probability distribution f. The mean of X, denoted by E[X], is defined as:
$E[X] = \displaystyle\sum_{x} x \cdot f(x)$

where the summation is taken over all possible values of x.
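Applying this definition to the two examples above, using the distributions given earlier, we get the following worked computations.

$E[X] = 60 \cdot \frac{1}{6} + (-12) \cdot \frac{5}{6} = 10 - 10 = 0$

$E[Y] = 0 \cdot \frac{1}{4} + 10 \cdot \frac{1}{2} + 20 \cdot \frac{1}{4} = 0 + 5 + 5 = 10$

So, on average, the dice bet pays nothing in the long run, while the coin lottery pays 10 dollars per play on average.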