  # DylsexicChciken

1. Would the inverse be true if the sequence is not a piecewise function? Two textbooks I have read seem to imply that the inverse is valid if the conditions of the alternating series test for convergence are not satisfied, although the textbooks do not deal with piecewise series.
2. So basically the converse is true in that it won't converge, thanks.
3. The alternating series test says that if the absolute values of the terms are not increasing and the limit of the absolute values of the terms goes to 0, then the alternating series converges. What happens if the converse is satisfied, i.e., when the absolute values of the terms of an alternating series are increasing or the limit of the absolute values of the terms does not go to zero? If either condition fails, can we say the alternating series is divergent?
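One piece of this question can be pinned down precisely. If the terms of any series do not tend to zero, the series diverges (the nth-term test); if the terms do tend to zero but are not eventually decreasing, the alternating series test is merely inconclusive, not a proof of divergence. A sketch (the example series b_n is illustrative, not from the thread):

```latex
% nth-term (divergence) test: applies to any series, alternating or not.
\[
\lim_{n\to\infty} |a_n| \neq 0
\quad\Longrightarrow\quad
\sum_{n=1}^{\infty} (-1)^{n+1} a_n \ \text{diverges.}
\]
% When |a_n| -> 0 but |a_n| is not eventually decreasing, the alternating
% series test gives no conclusion.  Divergence is still possible, e.g.
\[
b_n = \begin{cases} 1/n, & n \text{ odd} \\ 1/n^2, & n \text{ even} \end{cases}
\qquad
\sum_{n=1}^{\infty} (-1)^{n+1} b_n
  = \sum_{n \text{ odd}} \frac{1}{n} \;-\; \sum_{n \text{ even}} \frac{1}{n^2},
\]
% which diverges even though b_n -> 0, because the odd-harmonic part diverges
% while the subtracted part stays bounded.
```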
4. EDIT: Expected run time = worst case run time = O(N) for algorithm 3, not algorithm 2. I meant to copy and paste it for algorithm 3 but accidentally pasted it in algorithm 2. Sorry, I made a wrong edit yesterday. But now I can't edit anymore, so here is the edited version. I just need help verifying that I have the correct analyses of the problem below. Assume we have a random number generator function called randInt(i, j), which generates a random number between i and j. We will use this function to randomly generate a number between 1 and N, i.e., randInt(1, N). We want to fill an N-sized array with a random permutation of the first N integers using the randInt function above. We only want the run time analyses for the algorithms below.

    Algorithm 1: Fill array arr_ from arr_[0] to arr_[N-1]. To do this, randomly generate numbers until you get a number that isn't contained in any of the filled slots of arr_. This ensures that all numbers from 0 to N are used. My analysis: this algorithm has a probability, albeit extremely minute, that it will never terminate, so is this O(infinity)? This is because you can randomly generate a number that is already contained in the filled slots of arr_ every single time. After summation algebra, I find the expected run time to be O(N^2).

    Algorithm 2: This time we use an extra array. This array is an array of type bool, so each value is either 1 or 0. This array, which we can call contains_, is also of size N. If we randomly generate a number X, then we set contains_[X] to true, so the next time we see a randomly generated number and it turns out to be X, we don't actually have to check all of the filled slots of arr_ to know if arr_ already contains X; we merely check whether contains_[X] is set to true. My analysis: this algorithm suffers the same problem as algorithm 1, in that you can randomly generate a number that is already contained in the filled slots of arr_ every single time. So O(infinity), although there is no longer a need for a checking loop as there was in algorithm 1. Expected run time = O(N).

    Algorithm 3: Fill the array one by one sequentially with 1, 2, 3, 4, ..., N. Then for each position arr_[i], swap it with a randomly generated location between 0 and i, e.g., for( i = 1; i < n; ++i ) swap( a[ i ], a[ randInt( 0, i ) ] ); My analysis: this is O(N) since the problem size is assuredly reduced by one each iteration, so it will surely finish, unlike the last two algorithms. Expected run time = worst case run time = O(N). Are my analyses correct?
5. I just need help verifying that I have the correct analyses of the problem below. Assume we have a random number generator function called randInt(i, j), which generates a random number between i and j. We will use this function to randomly generate a number between 1 and N, i.e., randInt(1, N). We want to fill an N-sized array with a random permutation of the first N integers using the randInt function above.

    Algorithm 1: Fill array arr_ from arr_[0] to arr_[N-1]. To do this, randomly generate numbers until you get a number that isn't contained in any of the filled slots of arr_. This ensures that all numbers from 0 to N are used. My analysis: this algorithm has a probability, albeit extremely minute, that it will never terminate, so is this O(infinity)? This is because you can randomly generate a number that is already contained in the filled slots of arr_ every single time. After summation algebra, I find the expected run time to be O(N^2).

    Algorithm 2: This time we use an extra array. This array is an array of type bool, so each value is either 1 or 0. This array, which we can call contains_, is also of size N. If we randomly generate a number X, then we set contains_[X] to true, so the next time we see a randomly generated number and it turns out to be X, we don't actually have to check all of the filled slots of arr_ to know if arr_ already contains X; we merely check whether contains_[X] is set to true. My analysis: this algorithm suffers the same problem as algorithm 1, in that you can randomly generate a number that is already contained in the filled slots of arr_ every single time. So O(infinity), although there is no longer a need for a checking loop as there was in algorithm 1. Expected run time = worst case run time = O(N).

    Algorithm 3: Fill the array one by one sequentially with 1, 2, 3, 4, ..., N. Then for each position arr_[i], swap it with a randomly generated location between 0 and i, e.g., for( i = 1; i < n; ++i ) swap( a[ i ], a[ randInt( 0, i ) ] ); My analysis: this is O(N) since the problem size is assuredly reduced by one each iteration, so it will surely finish, unlike the last two algorithms. Are my analyses correct?
6. So a 16-bit field can represent 2^16 = 65,536 addresses, each address containing 1 byte. But how do the plus or minus 2^15 and the 32,768 bytes factor into this? Plus or minus 2^15 implies there are negative addresses, while 32,768 bytes is only half the number of addresses that can be represented.
7. I copied the paragraph from the book and forgot to edit some mistakes. The quote above is the fixed version. Yes, this is referring to MIPS instruction code. I don't know how to type the plus-or-minus symbol, so I wrote it in English instead. It's basically saying you can reach 32,768 bytes with the 16-bit address section of an I-format instruction. I am still having trouble wrapping my head around this.
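The plus-or-minus in the book is just the range of a 16-bit two's-complement field: the immediate is sign-extended and added to the base register, so it reaches about 32,768 bytes on either side of the base. A sketch (the base address value below is made up for illustration):

```cpp
#include <cstdint>

// A 16-bit field has 2^16 = 65,536 bit patterns.  Interpreted as
// two's complement they cover -2^15 .. 2^15 - 1, i.e. -32768 .. 32767.
static_assert(INT16_MIN == -32768, "lower bound is -2^15");
static_assert(INT16_MAX ==  32767, "upper bound is 2^15 - 1");

// The 16-bit immediate of an I-format instruction is sign-extended and
// added to the base register to form the effective address.
int32_t effectiveAddress(int32_t base, int16_t offset) {
    return base + offset;   // offset is sign-extended by integer promotion
}
```

So the 65,536 patterns are not 65,536 positive offsets: half of them address bytes below the base register, which is where "plus or minus 32,768 bytes" comes from.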
8. Let $s6 be the base address of an array A and $t0 contain some value. When you add: add $t0, $s6, $t0. Does register $s6 refer to a value or an address? More importantly, what does $t0 contain? Does $t0 contain a value or an address? How does the computer know whether the register refers to values or addresses? If it is a value, then how is it possible to do this afterward: lw $s0, 0($t0)? How can a register have a set of offset addresses? Does the address of A carry over to $t0? If so, where is it stored, because obviously $t0 is used to store the data? I thought offsets only applied to arrays. So do $s6 and $t0 both refer to an address and a value? Is $s6 by default equivalent to 0($s6)?
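A register just holds a 32-bit pattern; nothing in the hardware marks it as "a value" or "an address". It becomes an address only when an instruction uses it as one. A rough C++ analogue of the two instructions in the post (the array contents and the offset choice are illustrative):

```cpp
#include <cstdint>
#include <cstring>

// Registers are plain bit patterns; "value" vs "address" is decided by how
// the next instruction uses them, not by anything stored in the register.
int32_t loadExample() {
    int32_t A[4] = {10, 20, 30, 40};                    // the array A

    const char* s6 = reinterpret_cast<const char*>(A);  // $s6: base address of A
    long        t0 = 2 * 4;            // $t0: a plain value (index 2 * 4 bytes)

    // add $t0, $s6, $t0 -- the ALU simply adds two bit patterns; the sum is
    // only "an address" because the following load treats it as one.
    const char* t0_bits = s6 + t0;

    // lw $s0, 0($t0) -- read the word at offset 0 from the address in $t0.
    int32_t s0;
    std::memcpy(&s0, t0_bits + 0, sizeof s0);
    return s0;                                          // this is A[2]
}
```

On this reading, yes: after the add, $t0 holds the address of A[2] (its old value is overwritten), and $s6 used as lw's base with offset 0 behaves exactly like 0($s6).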
9. I think my book made a mistake; my teacher has the same book and the same edition with different wording, which was clearer and meant something different from the exact version I have, which is weird. 32,768 bytes is way too large to be equivalent to 16 bits: 32,768 bytes is 32,768 * 8 bits = 262,144 bits. There are only 2 bytes in 16 bits, because there are only 8 bits in 1 byte. The book probably meant plus or minus 32,768 integers, not bytes.
10. The book says 16 bits is equivalent to 32,768 bytes. How did my book get that?
11. My book says that when a computer tries to subtract a large binary number from a small binary number, the computer attempts to borrow from leading 0s, and the result is a string of leading 1s. Why is this the case, especially when the smaller number doesn't have leading 1s to accommodate the subtraction? So how can the result have leading 1s, as if the smaller number were bigger? What is happening in the hardware that forces the computer to create a string of leading 1s?
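The leading 1s are just fixed-width (two's-complement) wraparound: the borrow ripples through every leading 0, flipping each to 1, and the final borrow out of the top bit is discarded. In 8 bits, 00000011 - 00000101 gives 11111110, which is the two's-complement encoding of -2. A sketch:

```cpp
#include <cstdint>

// Fixed-width binary subtraction: when the subtrahend is larger, the
// borrow propagates through all the leading 0s of the minuend, turning
// them into 1s.  Unsigned arithmetic in C++ is defined modulo 2^width,
// which models exactly what the hardware does.
uint8_t subtract(uint8_t small, uint8_t big) {
    return static_cast<uint8_t>(small - big);   // e.g. 3 - 5 -> 0xFE
}
```

Reinterpreted as a signed byte, that 11111110 pattern reads as -2, which is why the "string of leading 1s" is the correct answer rather than an error.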
12. For the topic of linear regression, my book says that the coefficient of determination equals the correlation coefficient squared, but it doesn't explain why. So why is one equal to the square of the other when their formulas look completely different?
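For simple (one-predictor) least-squares regression the identity follows in a few lines, using the standard S notation, with $S_{xx} = \sum_i (x_i-\bar x)^2$, $S_{yy} = \sum_i (y_i-\bar y)^2$, and $S_{xy} = \sum_i (x_i-\bar x)(y_i-\bar y)$ (the book's symbols may differ):

```latex
% Least-squares slope and sample correlation:
\[
\hat\beta_1 = \frac{S_{xy}}{S_{xx}},
\qquad
r = \frac{S_{xy}}{\sqrt{S_{xx}\,S_{yy}}}.
\]
% Since \hat y_i - \bar y = \hat\beta_1 (x_i - \bar x), the regression sum
% of squares is SSR = \hat\beta_1^2 S_{xx}, and therefore
\[
R^2 = \frac{\mathrm{SSR}}{\mathrm{SST}}
    = \frac{\hat\beta_1^{\,2}\, S_{xx}}{S_{yy}}
    = \frac{S_{xy}^2}{S_{xx}\, S_{yy}}
    = r^2 .
\]
```

So the two formulas only look unrelated; substituting the least-squares slope into the definition of $R^2$ collapses it to $r^2$.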
13. I was thinking of it in terms of the definition of a. It took some time, but I formulated the intuition below. If p-value $\leq$ a (the equal sign under the inequality is not showing on this forum for some reason): getting our observed statistic from the sample is at most as likely as the chance of rejecting a true $H_0$. In other words, we very likely have a higher chance that $H_0$ is false than the chance of rejecting a true $H_0$. This simplifies to: we very likely have a higher chance of rejecting a false $H_0$ than of rejecting a true $H_0$.
14. If the type I error a for a hypothesis test is 0.05 and the p-value = 0.01, we reject $H_0$ because p-value $\leq$ a. What is the reasoning or intuition for this?
15. I looked through the book and I don't think I see anything called acceptance criteria or decision rules. I am still reading an early section on hypothesis testing, so it might be somewhere later on. I was referring to the value b (of the test procedure on the variable being tested, p), the probability of a type II error. Type I error, a, and type II error, b, are inversely related. The statistician has direct control only over the type I error, a; therefore the statistician can control the type II error, b, indirectly by raising or lowering a. So even knowing this, in practice the statistician still won't be able to find the value of b (so far I haven't learned how to calculate b, if there even is a way)? I am still in an early part of the hypothesis testing chapter, so the information I have learned so far should be about basic concepts of error and interpreting a problem.
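For reference, b can in fact be computed once a specific alternative value of the parameter is assumed. A sketch for a one-sided z-test on a mean with known standard deviation (the symbols $\mu_0$, $\mu_1$, $\sigma$, $n$ are illustrative, not from the thread):

```latex
% Test H_0: \mu = \mu_0 against H_1: \mu > \mu_0 at level a,
% rejecting when Z > z_a.  If the true mean is \mu_1 > \mu_0:
\[
b(\mu_1) \;=\; P(\text{fail to reject } H_0 \mid \mu = \mu_1)
        \;=\; \Phi\!\left( z_a - \frac{\mu_1 - \mu_0}{\sigma/\sqrt{n}} \right),
\]
% where \Phi is the standard normal CDF.  Raising a lowers z_a and hence
% lowers b, which is the inverse relation described above.
```

Note that b depends on which alternative $\mu_1$ is assumed; there is no single value of b for the test as a whole, which may be why the early chapters defer it.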