AllCombinations

Members
  • Posts: 29
  • Joined
  • Last visited

Profile Information

  • Favorite Area of Science: physics

AllCombinations's Achievements

Quark (2/13)

Reputation: 0

  1. Thank you for your feedback, Eise. However, my inquiry remains unanswered: in philosophy, would one provide a proof as we have done here? It is common practice in mathematics, but would it be done in philosophy? Did Kant do it in his Critique of Pure Reason? (I haven't read it. It was a title that sprang to mind.)
  2. Hello. I am currently studying "Introduction to Logic," 2nd Edition, by Harry J. Gensler, and I have a question about writing logical proofs. The book's preferred method is to assume the opposite of an argument's conclusion and then work through the premises, looking for a contradiction to arise. If a contradiction arises from assuming the opposite of the original conclusion, then (in a binary system of true and false statements) the original conclusion is proven to follow and the argument is said to be valid. My question is this: I know that in mathematics it is very common - even required - to prove a statement; that is, a claim will be made and then a proof will immediately follow it. But what is common practice in philosophy? Are statements made and then left to the reader to work out for themselves whether the reasoning is valid (if not sound)? Or is proof required here too? Of course I do understand the difference between validity and soundness, but is it common to prove validity? To illustrate, here is a problem from the book I mentioned: "Some are logicians. Some are not logicians. Therefore, there is more than one being." (problem 2, section 9.2b, pg. 210) Strictly in terms of validity, would a philosophical text simply leave it at this? Or would it be correct to include a proof such as the following? That just seems like a mess... I mean, look at it. But I have never read a philosophical text before, so I don't know. (An illustrative sketch of this style of proof appears in the notes after this post list.) In philosophy, what is proven and what isn't? And how does it compare to mathematics? Thanks ahead of time.
  3. While we are on the subject of determinants, here is an interesting problem I found in an old linear algebra text. It reads that if you have two distinct points [latex](x_{1},y_{1})[/latex] and [latex](x_{2},y_{2})[/latex], then solving the equation [latex]\det\left(\begin{array}{lll}1 & x & y\\1 & x_{1} & y_{1}\\1 & x_{2} & y_{2}\end{array}\right)=0[/latex] for y yields a line that passes through those two points. Why is that, and what is this peculiar form of interpolation/curve-fitting? (A short expansion of this determinant appears in the notes after this post list.) It seems to me that because determinants measure the area of a parallelogram in [latex]\mathbb{R}^{2}[/latex], defining a matrix with two variables - in this case, x and y - and setting its determinant equal to zero causes the parallelogram to have zero area, and the result is an empty parallelogram that acts as a line through those two points. Isn't that odd? And then the columnspace (or just space) of the matrix is the plane in [latex]\mathbb{R}^{2}[/latex] and the rowspace (or dualspace) is the set of points along the line, of which two are defined, but any combination of x and y for which said matrix's determinant is zero will also be in the rowspace. Where can I learn more about this? I would research it on my own, but I don't know what it is called, and when I try googling it all I get is stuff that doesn't seem to relate. Thanks a lot. Oh, and the text the problem came from is David C. Lay, "Linear Algebra and Its Applications," second edition (2000), page 206.
  4. Studiot, that property of determinants is really interesting. I've never seen that before. As for your comment on nonlinear quantities, it did catch my attention, as I can most certainly see the potential connections with non-linear concepts. As for your picture, I don't know what it is, but it looks intriguing.
  5. They are numbers. Nothing more. As you said, I introduced them to keep the matrix square. The focus was less on the third row and more on the third column....
  6. Then two columns would be the same. I didn't say the determinant would never be zero. The point was that there is no immediate property of determinants (that I know of) concerning multiplication of rows or columns. It was just a question about terminology so I could learn more about this sort of matrix.
  7. Hello. Suppose you have a matrix, a 2x2 say, [latex] M = \left(\begin{array}{ll}a_{1} & a_{2}\\b_{1} & b_{2}\end{array}\right) [/latex] and suppose it is in some way expanded into a 3x3 matrix so that the third row or column is formed from some kind of multiplication of the first two, such as [latex] N = \left(\begin{array}{lll}a_{1} & a_{2} & a_{1}a_{2}\\b_{1} & b_{2} & b_{1}b_{2}\\c_{1} & c_{2} & c_{1}c_{2}\end{array}\right) [/latex] I was wondering whether this process has a name, because I am having trouble finding anything about it online. The reason for including a new row is that I am studying determinants and wanted to keep the matrix square. The determinant won't be zero because we multiplied two rows together rather than adding them. Does this have a name and, if it is common enough to have a name, what does this process mean geometrically? There is a rule for adding rows or columns but not for multiplying. (A small numerical sketch of the construction appears in the notes after this post list.) Thanks. If I've not been clear I can try to be more so.
  8. Thank you for your detailed explanation. I found it very helpful.
  9. Thank you for your responses. I would be happy to clarify. The function [latex]f[/latex] is meant to be a periodic function with period [latex]T[/latex] that is zero at every element of its domain, the integer multiples of the constant [latex]T[/latex]. To give a solid example, let [latex]f=sin\left ( \frac{2\pi }{T}x \right )[/latex] and let [latex]g=x^{2}[/latex]. Then [latex]f(0)=0, f(T)=0, f(2T)=0,[/latex] etc., so that all integer multiples of the period map to a zero of the sine function. And then the composition [latex]f\circ g^{-1}=f(g^{-1}(x))=sin\left(\frac{2\pi}{T}g^{-1}(x)\right)[/latex] is an aperiodic wave with a variable period whereby the original periodic wave has been bent out of shape, the idea being to take a regular wave and make it aperiodic. Then we can describe the zeros of the composition as the points where [latex]f[/latex] is said to vanish. What are these values? They are all [latex]x[/latex] for which [latex]\frac{2\pi}{T}g^{-1}(x)=\pi z, z\in\mathbb{Z}[/latex] which, when solved for [latex]x[/latex], yields [latex]x=g\left(\frac{1}{2}Tz \right )[/latex]. (A worked instance with this choice of [latex]g[/latex] appears in the notes after this post list.) The original goal was to communicate in a more formal manner. What could be done differently? I am thankful for your input. (Sorry for my errors or typos, if any; I hope the idea is still clear despite them.)
  10. Hello. I would appreciate some help in determining how much sense the following really makes. I don't know very much about "writing mathematics" so any advice is very welcome. --- Let [latex]f:\mathbb{Z}T\rightarrow 0[/latex] be a periodic function with period [latex]T[/latex] and let [latex]g:\mathbb{Z}T\rightarrow \mathbb{R}[/latex] be invertible. Then the zeros of the composition [latex]f\circ g^{-1}:\mathbb{R}\rightarrow 0[/latex] is the set of all [latex]t[/latex] for which [latex]t=g\left(\frac{1}{2}Tz\right),z\in \mathbb{Z}[/latex], is satisfied. --- That is it. I can explain more if necessary. I welcome all constructive feedback, positive and negative. Thank you.
  11. Hi. I came across the following relation. Given two vectors v1 and v2, their exterior product is related to their tensor product by the relation [latex]v_1 \wedge v_2 = v_1 \otimes v_2 - v_2 \otimes v_1[/latex] which expands for three vectors [latex]v_1,v_2,v_3[/latex] as [latex]v_1 \wedge v_2 \wedge v_3 =v_1\otimes v_2\otimes v_3-v_2\otimes v_1\otimes v_3 +v_3\otimes v_1\otimes v_2 - v_3\otimes v_2\otimes v_1+v_2\otimes v_3\otimes v_1 - v_1\otimes v_3\otimes v_2[/latex] I get the basic idea of the exterior and tensor products, but I don't know the notation for the right-hand-side permutation sum/product/whatever. (The usual notation is sketched in the notes after this post list.) The left side of the equation will be [latex]\bigwedge_{i=1}^{n}v_i[/latex] for [latex]v_1 \wedge v_2 \wedge ... \wedge v_n[/latex] Thanks!
  12. Thanks! And thanks for your encouragement. It's pretty rare on the Internet. Just talking about all of this stuff to someone else has helped me to think through it. Going back to the original example from that book I cited, we say that {1, x, x^2} is a basis and {a, b, c} is a dual basis. Suppose then that we wish to perform polynomial interpolation of a function of the form y = a + bx + cx^2 through the points, say, (1,1), (2,3), and (7,11). Then, in setting up a system of equations, we pick a + b + c = 1, a + 2b + 4c = 3, a + 7b + 49c = 11, and the basis of the columns formed by the left side of the system is {1, x, x^2}. When these values are solved for, we get a = -17/15, b = 33/15, and c = -1/15. (A short numerical check appears in the notes after this post list.) Yet if the array format of the equations is maintained, then a, b, and c go down the rows when the system is solved by elimination. I am understanding it better, I think, at least in terms of matrices, rows, and columns. And this also gives me a better understanding of what the coefficients of an arbitrary function are. Thanks again.
  13. Um... I don't know. No, I guess not. I am still trying to get used to speaking in such abstract terms. I guess I was just trying to formulate a more concrete example, such as a 2x2 matrix in R^2. You're right, though, I need to be careful with how things are phrased. According to Wikipedia's article on row and column vectors, ... And the column space is the space spanned by the columns of a matrix... therefore, if the columns of a matrix are linearly independent then the columns span whatever space corresponds with the number of vectors. Again, for a 2x2 matrix that would be R^2, assuming everything is linear. I don't know for sure how to be more careful in talking about such things. But suppose that the columns of that matrix are linearly independent but the rows are not. Then the space might be linearly independent but its dual space will not be. For instance, if the columns of the matrix correspond with the standard basis but the rows are identical. Hmmm... but that wouldn't work either, because then the columns would not be linearly independent. Does this imply that if the space is linearly independent then the dual space will be too? Or, to put it another way, that the space and the dual space will ALWAYS have the same dimension? Maybe I am getting a little off topic. Like I said, I am trying to adjust to thinking about things so abstractly. Maybe it would be better to find concrete examples first and then work from the specific to the general.
  14. Following your example, I ended up on the linear functional page of Wikipedia, which reads (in part) that a linear functional (a.k.a. linear form, one-form, or covector) is a linear map from a vector space to its field of scalars. In R^n, if vectors are represented as column vectors, then linear functionals are represented as row vectors. That is good enough for me. Then, given a vector space V over a field F, the dual space V* is the set of all linear maps from V to F... a.k.a. the linear functionals. So can I say that if we have a vector space V such as R^2 over the field of real numbers, and we have a couple of vectors v1, v2, and we arrange those vectors in a square array (a matrix), then if the columns of that matrix are in V, its rows are in V*? Does that mean that if the columns of the matrix correspond to some basis e1, e2, then the rows correspond to some "dual basis" e*1, e*2 that runs along the rows? Do I have that right?
  15. Okay... so coefficients are usually over a field. A field is some set like the rationals, reals, or complex numbers. However, according to this book I found, "Linear Algebra via Exterior Products" by Sergei Winitzki, found at http://www.ime.unicamp.br/~llohann/Algebra Linear Verao 2013/Material extra/Linear algebra via exterior product.pdf bottom of page 16 and into page 17 (pages 22, 23 of the pdf file), there is an example where the coefficients of a polynomial are defined as a dual basis to a polynomial with a basis 1, x, x^2. It sounds as if, when 1, x, x^2 is a basis, the arbitrary coefficients a, b, c are the dual basis. Is this always the relation between variables/indeterminates and coefficients in algebraic and transcendental functions? Or am I not understanding what that document is saying? (A note on this appears after this post list.) Thanks in advance.
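
Worked Notes on Selected Posts

Regarding post 2: as an illustrative sketch only (not the poster's original proof, and not in Gensler's exact notation), an indirect proof of the quoted argument might run as follows. Assume the opposite of the conclusion and look for a contradiction:

  1. Some are logicians. (premise)
  2. Some are not logicians. (premise)
  3. Assume, for indirect proof, that there is not more than one being, i.e. any two beings are identical.
  4. From 1, there is a being a such that a is a logician.
  5. From 2, there is a being b such that b is not a logician.
  6. From 3, a = b, so a both is and is not a logician, a contradiction.
  7. Therefore the assumption in 3 is false, there is more than one being, and the argument is valid.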
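
Regarding post 3: expanding the determinant along its first row shows why setting it to zero gives the line through the two points:

[latex]\det\left(\begin{array}{lll}1 & x & y\\1 & x_{1} & y_{1}\\1 & x_{2} & y_{2}\end{array}\right)=(x_{1}y_{2}-x_{2}y_{1})-x(y_{2}-y_{1})+y(x_{2}-x_{1})[/latex]

Setting this to zero is a linear equation in x and y, and since the two points are distinct, the coefficients of x and y are not both zero, so it describes a line. Substituting [latex](x,y)=(x_{1},y_{1})[/latex] makes the first two rows equal, so the determinant vanishes; the same holds for [latex](x_{2},y_{2})[/latex]. Hence both points satisfy the equation, and the solution set is exactly the line through them.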
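
Regarding post 7: a minimal numerical sketch of the construction described there, using made-up entries (the values and variable names are only for illustration). It appends an arbitrary extra row to M and then a third column whose entries are the entrywise products of the first two, so one can experiment with when the determinant of N vanishes.

[code]
import numpy as np

# Illustrative values only: a 2x2 matrix M and an arbitrary extra row (c1, c2).
M = np.array([[2.0, 3.0],
              [5.0, 7.0]])
c = np.array([11.0, 13.0])

rows = np.vstack([M, c])                               # stack the extra row under M
N = np.column_stack([rows, rows[:, 0] * rows[:, 1]])   # third column = product of the first two

print(N)
print(np.linalg.det(M), np.linalg.det(N))
[/code]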
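
Regarding post 9: a worked instance of the zero formula there, using the post's own choice [latex]g=x^{2}[/latex] (restricted to [latex]x\geq 0[/latex] so that [latex]g[/latex] is invertible). The zeros of the composition are

[latex]x=g\left(\tfrac{1}{2}Tz\right)=\tfrac{T^{2}z^{2}}{4},\quad z=0,1,2,\ldots[/latex]

so instead of being evenly spaced every half-period, the zeros spread out quadratically, which is the "bent out of shape" aperiodic wave the post describes.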
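
Regarding post 11: the right-hand side is usually written as a signed sum over the symmetric group [latex]S_{n}[/latex]. Under the convention that post uses (no [latex]1/n![/latex] normalization factor, which some authors include),

[latex]v_{1}\wedge v_{2}\wedge\cdots\wedge v_{n}=\sum_{\sigma\in S_{n}}\operatorname{sgn}(\sigma)\,v_{\sigma(1)}\otimes v_{\sigma(2)}\otimes\cdots\otimes v_{\sigma(n)}[/latex]

where [latex]\operatorname{sgn}(\sigma)[/latex] is +1 for even permutations and -1 for odd ones. The six-term expansion for three vectors is exactly this sum over [latex]S_{3}[/latex].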
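
Regarding post 12: a minimal numerical check of the interpolation arithmetic there (numpy is just one convenient way to do it).

[code]
import numpy as np

# System from the post: y = a + b*x + c*x^2 through (1, 1), (2, 3), (7, 11).
A = np.array([[1, 1, 1],
              [1, 2, 4],
              [1, 7, 49]], dtype=float)
y = np.array([1, 3, 11], dtype=float)

a, b, c = np.linalg.solve(A, y)
print(a, b, c)   # approximately -1.1333, 2.2, -0.0667, i.e. -17/15, 33/15, -1/15
[/code]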
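
Regarding post 15: to be precise about the relation asked about there, the dual basis consists not of the coefficients themselves but of the coefficient-extracting functionals. For the space of polynomials of degree at most 2 with basis [latex]\{1,x,x^{2}\}[/latex], the dual basis [latex]\{e^{*}_{1},e^{*}_{2},e^{*}_{3}\}[/latex] is defined by [latex]e^{*}_{i}(e_{j})=\delta_{ij}[/latex], so that for [latex]p(x)=a+bx+cx^{2}[/latex] one gets [latex]e^{*}_{1}(p)=a[/latex], [latex]e^{*}_{2}(p)=b[/latex], [latex]e^{*}_{3}(p)=c[/latex]. That appears to be the sense in which the cited book pairs the coefficients a, b, c with the dual basis.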