Everything posted by AllCombinations

  1. Thank you for your feedback, Eise. However, my inquiry remains unanswered: in philosophy, would one provide a proof as we have done here? It is common practice in mathematics, but would it be done in philosophy? Did Kant do it in his Critique of Pure Reason? (I haven't read it. It was a title that sprang to mind.)
  2. Hello. I am currently studying "Introduction to Logic", 2nd edition, by Harry J. Gensler, and I have a question about writing logical proofs. The book's preferred method is to assume the opposite of an argument's conclusion and then take the argument apart while looking for a contradiction. If a contradiction arises from assuming the opposite of the original conclusion, then (in a binary system of true and false statements) the original conclusion is proven to follow and the argument is said to be valid. My question is this: I know that in mathematics it is very common, even required, to prove a statement; that is, a claim is made and a proof immediately follows it. But what is common practice in philosophy? Are statements made and then left to the reader to work out for themselves whether the reasoning is valid (if not sound)? Or is proof required here too? Of course I do understand the difference between validity and soundness, but is it common to prove validity? To illustrate, here is a problem from the book I mentioned: "Some are logicians. Some are not logicians. Therefore, there is more than one being." (problem 2, section 9.2b, pg. 210) Strictly in terms of validity, would a philosophical text simply leave it at this? Or would it be correct to include a proof such as the one sketched at the end of this post? That just seems like a mess... I mean, look at it. But I have never read a philosophical text before, so I don't know. In philosophy, what is proven and what isn't? And how does it compare to mathematics? Thanks ahead of time.
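     Here is the sort of proof I mean, rewritten in my own notation rather than in Gensler's exact layout (so treat the formatting as an approximation of the book's):
        1. (∃x)Lx                 [premise: some are logicians]
        2. (∃x)~Lx                [premise: some are not logicians]
        3. asm: (∀x)(∀y) x = y    [assume the opposite of the conclusion: there is at most one being]
        4. La                     [from 1, naming a witness a]
        5. ~Lb                    [from 2, naming a witness b]
        6. a = b                  [from 3]
        7. Lb                     [from 4 and 6, contradicting 5]
     The assumption leads to a contradiction, so there is more than one being and the argument is valid.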
  3. While we are on the subject of determinants, here is an interesting problem I found in an old linear algebra text. It reads that if you have two distinct points [latex](x_{1},y_{1})[/latex] and [latex](x_{2},y_{2})[/latex], then solving the equation [latex]\det\left(\begin{array}{lll}1 & x & y\\1 & x_{1} & y_{1}\\1 & x_{2} & y_{2}\end{array}\right)=0[/latex] for y yields a line that passes through those two points. Why is that, and what is this peculiar form of interpolation/curve-fitting? It seems to me that because determinants measure the area of a parallelogram in [latex]\mathbb{R}^{2}[/latex], defining a matrix with two variables - in this case, x and y - and setting its determinant equal to zero causes the parallelogram to have zero area, and the result is an empty parallelogram that acts as a line through those two points. Isn't that odd? And then the column space (or just space) of the matrix is the plane in [latex]\mathbb{R}^{2}[/latex] and the row space (or dual space) is the set of points along the line, of which two are defined, but any combination of x and y for which said matrix's determinant is zero will also be in the row space. Where can I learn more about this? I would research it on my own but I don't know what it is called, and when I try googling it all I get is stuff that doesn't seem to relate. Thanks a lot. Oh, and the text the problem came from is "Linear Algebra and Its Applications", second edition (2000), by David C. Lay, page 206.
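     As a sanity check on the book's claim, here is a quick symbolic computation (sympy, with two sample points I made up):
        import sympy as sp

        x, y = sp.symbols('x y')
        x1, y1, x2, y2 = 1, 2, 4, 8  # hypothetical sample points

        M = sp.Matrix([[1, x,  y ],
                       [1, x1, y1],
                       [1, x2, y2]])

        print(sp.solve(sp.Eq(M.det(), 0), y))  # [2*x], i.e. the line y = 2x through (1, 2) and (4, 8)
     One way to read it, I think: the determinant vanishes exactly when the row (1, x, y) is a linear combination of (1, x1, y1) and (1, x2, y2), and the leading 1s force that combination to be affine, which puts (x, y) on the line through the two points.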
  4. Studiot, that property of determinants is really interesting. I've never seen that before. As for your comment on nonlinear quantities, it did catch my attention, as I can most certainly see the potential connections with nonlinear concepts. As for your picture, I don't know what it is but it looks intriguing.
  5. They are numbers. Nothing more. As you said, I introduced them to keep the matrix square. The focus was less on the third row and more on the third column....
  6. Then two columns would be the same. I didn't say the determinant would never be zero. The point was that there is no immediate property of determinants (that I know of) concerning multiplication of rows or columns. It was just a question about terminology so I could learn more about this sort of matrix.
  7. Hello. If you have a matrix, a 2x2 say [latex] M = \left(\begin{array}{ll}a_{1} & a_{2}\\b_{1} & b_{2}\end{array}\right) [/latex] and suppose it is in some way expanded into a 3x3 matrix so that the third row or column is formed from some kind of multiplication of the first two, such as [latex] N = \left(\begin{array}{lll}a_{1} & a_{2} & a_{1}a_{2}\\b_{1} & b_{2} & b_{1}b_{2}\\c_{1} & c_{2} & c_{1}c_{2}\end{array}\right) [/latex] I was wondering if this process has a name, because I am having trouble finding anything about it online. The reason for including a new row is that I am studying determinants and wanted to keep the matrix square. The determinant won't be zero because we multiplied two columns together rather than adding them. Does this have a name and, if it is common enough to have a name, what does this process mean geometrically? There is a rule for adding rows or columns but not for multiplying. Thanks. If I've not been clear I can try to be more so.
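     To make the construction concrete, this is how I would set it up symbolically (sympy; the letters are just placeholders):
        import sympy as sp

        a1, a2, b1, b2, c1, c2 = sp.symbols('a1 a2 b1 b2 c1 c2')

        # the third column is the entrywise product of the first two columns
        N = sp.Matrix([[a1, a2, a1*a2],
                       [b1, b2, b1*b2],
                       [c1, c2, c1*c2]])

        # six distinct terms, generally nonzero; a summed third column
        # would instead force the determinant to zero
        print(sp.expand(N.det()))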
  8. Thank you for your detailed explanation. I found it very helpful.
  9. Thank you for your responses. I would be happy to clarify. The function [latex]f[/latex] is meant to be a periodic function with period [latex]T[/latex] that has zeros for every element of its domain, that domain being the integers times the constant [latex]T[/latex]. To give a solid example, let [latex]f(x)=\sin\left(\frac{2\pi}{T}x\right)[/latex] and let [latex]g(x)=x^{2}[/latex]. Then [latex]f(0)=0, f(T)=0, f(2T)=0,[/latex] etc., so that all integer multiples of the period map to some zero of the sine function. And then the composition [latex]f\circ g^{-1}=f(g^{-1}(x))=\sin\left(\frac{2\pi}{T}g^{-1}(x)\right)[/latex] is an aperiodic wave with a variable period whereby the original periodic wave has been bent out of shape, the idea being to take a regular wave and make it aperiodic. Then we can describe the zeros of the composition as the points where [latex]f[/latex] is said to vanish. What are these values? They are all [latex]x[/latex] for which [latex]\frac{2\pi}{T}g^{-1}(x)=\pi z, z\in\mathbb{Z}[/latex], which, when solved for [latex]x[/latex], yields [latex]x=g\left(\frac{1}{2}Tz\right)[/latex]. The original goal was to communicate in a more formal manner. What could be done differently? I am thankful for your input. (Sorry for any errors or typos; I hope the idea is still clear despite them.)
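     Here is a small numerical check of that claim, using the sine example with g(x) = x^2, so g^{-1}(x) = sqrt(x) on the nonnegative reals (the names are mine):
        import numpy as np

        T = 2.0                                  # sample period
        g = lambda t: t**2                       # invertible for t >= 0
        g_inv = lambda x: np.sqrt(x)
        f = lambda t: np.sin(2 * np.pi * t / T)  # periodic with period T

        warped = lambda x: f(g_inv(x))           # f o g^{-1}, the aperiodic wave

        predicted = [g(T * z / 2) for z in range(5)]  # x = g(T*z/2) for z = 0..4
        print([warped(x) for x in predicted])    # all ~0 up to floating-point error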
  10. Hello. I would appreciate some help in determining how much sense the following really makes. I don't know very much about "writing mathematics" so any advice is very welcome. --- Let [latex]f:\mathbb{Z}T\rightarrow 0[/latex] be a periodic function with period [latex]T[/latex] and let [latex]g:\mathbb{Z}T\rightarrow \mathbb{R}[/latex] be invertible. Then the zeros of the composition [latex]f\circ g^{-1}:\mathbb{R}\rightarrow 0[/latex] are the set of all [latex]t[/latex] for which [latex]t=g\left(\frac{1}{2}Tz\right),z\in \mathbb{Z}[/latex] is satisfied. --- That is it. I can explain more if necessary. I welcome all constructive feedback, positive and negative. Thank you.
  11. Hi. I came across the following relation. Given two vectors v1 and v2, their exterior product is related to their tensor product by the relation [latex]v_1 \wedge v_2 = v_1 \otimes v_2 - v_2 \otimes v_1[/latex] which expands for three vectors [latex]v_1,v_2,v_3[/latex] as [latex]v_1 \wedge v_2 \wedge v_3 = v_1\otimes v_2\otimes v_3 - v_2\otimes v_1\otimes v_3 + v_3\otimes v_1\otimes v_2 - v_3\otimes v_2\otimes v_1 + v_2\otimes v_3\otimes v_1 - v_1\otimes v_3\otimes v_2[/latex] I get the basic idea of the exterior and tensor products but I don't know the notation for the right-hand-side permutation sum/product/whatever. The left side of the equation will be [latex]\bigwedge_{i=1}^{n}v_i[/latex] for [latex]v_1 \wedge v_2 \wedge \cdots \wedge v_n[/latex] Thanks!
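     Edit: the closest notation I have found so far is the alternation sum over the symmetric group (assuming the convention without a 1/n! factor, which seems to match the expansions above): [latex]v_1\wedge v_2\wedge\cdots\wedge v_n=\sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\,v_{\sigma(1)}\otimes v_{\sigma(2)}\otimes\cdots\otimes v_{\sigma(n)}[/latex] where [latex]S_n[/latex] is the set of permutations of [latex]\{1,\ldots,n\}[/latex] and [latex]\operatorname{sgn}(\sigma)[/latex] is +1 for even permutations and -1 for odd ones.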
  12. Thanks! And thanks for your encouragement. It's pretty rare on the Internet. Just talking about all of this stuff to someone else has helped me to think through it. Going back to the original example from that book I cited, we say that {1, x, x^2} is a basis and {a, b, c} is a dual basis. Suppose then that we wish to perform polynomial interpolation of a function of the form y=a+bx+cx^2 through the points, say, (1,1), (2,3), and (7,11). Then, in choosing a system of equations, if we pick a+b+c=1, a+2b+4c=3, a+7b+49c=11, the basis of the columns formed by the left side of the system is {1, x, x^2}. When these values are solved for, we get a=-17/15, b=33/15, and c=-1/15. And in maintaining the array format of the equations, a, b, and c go down the rows when the system is solved by elimination or whatever. I am understanding it better, I think, at least in terms of matrices, rows, and columns. And this also gives me a better understanding of what the coefficients of an arbitrary function are. Thanks again.
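     For anyone following along, the same computation in numpy (solving the system above):
        import numpy as np

        # system for y = a + b*x + c*x^2 through (1, 1), (2, 3), (7, 11)
        X = np.array([[1.0, 1.0,  1.0],
                      [1.0, 2.0,  4.0],
                      [1.0, 7.0, 49.0]])
        y = np.array([1.0, 3.0, 11.0])

        a, b, c = np.linalg.solve(X, y)
        print(a, b, c)  # approximately -17/15, 33/15, -1/15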
  13. Um... I don't know. No, I guess not. I am still trying to get used to speaking in such abstract terms. I guess I was just trying to formulate a more concrete example such as a 2x2 matrix in R2. You're right though, I need to be careful with how things are phrased. According to wikipedia's article on row and column vectors, the column space is the space spanned by the columns of a matrix... therefore, if the columns of a matrix are linearly independent then the columns span whatever space corresponds with the number of vectors. Again, for a 2x2 matrix that would be R2, assuming everything is linear. I don't know for sure how to be more careful in talking about such things. But suppose that the columns of that matrix are linearly independent but the rows are not. Then the space might be linearly independent but its dual space would not be. For instance, suppose the columns of the matrix correspond with the standard basis but the rows are identical. Hmmm... but that wouldn't work either, because then the columns would not be linearly independent. Does this imply that if the space is linearly independent then the dual space will be too? Or, to put it another way, that the space and the dual space will ALWAYS have the same dimension? Maybe I am getting a little off topic. Like I said, I am trying to adjust to thinking about things so abstractly. Maybe it would be better to find concrete examples first and then work from the specific to the general.
  14. Following your example, I ended up on the linear functional page of wikipedia, which reads (in part) that a linear functional (a.k.a. linear form, one-form, or covector) is a linear map from a vector space to its field of scalars, and that in R^n, if vectors are represented as column vectors, then the linear functionals are represented as row vectors. That is good enough for me. Then, given a vector space V over a field F, the dual space V* is the set of all linear maps from V to F... a.k.a. the linear functionals. So can I say that if we have a vector space V such as R^2 over the field of real numbers, and we have a couple of vectors v1, v2 arranged in a square array (a matrix), then the columns of that matrix are in V and its rows are in V*? Does that mean that if the columns of the matrix correspond to some basis e1, e2, then the rows correspond to some "dual basis" e*1, e*2? Do I have that right?
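     While waiting, I tried a small numerical experiment around the defining property e*i(vj) = delta_ij; my tentative understanding is that the dual basis shows up as the rows of the inverse matrix rather than of the matrix itself, but please correct me:
        import numpy as np

        # columns of A are a basis {v1, v2} of R^2
        A = np.array([[2.0, 1.0],
                      [1.0, 3.0]])

        # row i of A^{-1} applied to column j of A gives delta_ij,
        # which is exactly the dual-basis property
        print(np.linalg.inv(A) @ A)  # identity matrix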
  15. Okay... so coefficients are usually over a field. A field is some set like the rationals, reals, or complex numbers. However, according to this book I found, "Linear Algebra via Exterior Products" by Sergei Winitzki, found at http://www.ime.unicamp.br/~llohann/Algebra Linear Verao 2013/Material extra/Linear algebra via exterior product.pdf (bottom of page 16 and into page 17; pages 22, 23 of the pdf file), there is an example where the coefficients of a polynomial are defined as a dual basis to a polynomial with a basis 1, x, x^2. It sounds as if, when 1, x, x^2 is a basis, the arbitrary coefficients a, b, c form the dual basis. Is this always the relation between variables/indeterminates and coefficients in algebraic and transcendental functions? Or am I not understanding what that document is saying? Thanks in advance.
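     If I am reading the example right, the dual basis elements are just the coefficient-extracting maps; for [latex]p(x)=a+bx+cx^{2}[/latex] I believe they can be written explicitly as [latex]e^{*}_{1}(p)=p(0)=a,\quad e^{*}_{2}(p)=p'(0)=b,\quad e^{*}_{3}(p)=\tfrac{1}{2}p''(0)=c[/latex] since each map sends its own basis element (1, x, or x^2) to 1 and the other two to 0.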
  16. Yeah... well, I am going to go study the abstract algebra series on youtube. I guess I will get back to you.
  17. Well, according to wikipedia at least, https://en.wikipedia.org/wiki/Polynomial_ring#The_polynomial_ring_K.5BX.5D the coefficients are elements of a field, whatever a "field" is. Learning is hard with that site because the polynomial ring page defines things in terms of fields, and the field page says that a field is a kind of ring. Kind of a catch-22, which you get a lot with wikipedia. I don't have any books that talk about rings and fields and groups though.
  18. That is interesting, except I do not know what rings and fields are. In looking them up online I see that they appear to be concepts in abstract algebra, which I have yet to learn. I have only studied linear algebra. Is abstract algebra a prerequisite for comprehending how the coefficients of a function (polynomial, rational, transcendental, etc.) affect the function itself and how those coefficients behave together or might be considered as a set unto themselves? I am largely self-taught and I am looking for guidance in what to learn. Math is so vast a subject and I am not sure which direction to head in. Specifically, apart from wishing to comprehend and discuss coefficients in the manner I have mentioned, I have a textbook on tensor analysis (Tensor Analysis by Richard L. Bishop and Samuel I. Goldberg) and another that claims to be in-depth on the determinants of matrices (Determinants and Their Applications in Mathematical Physics by Robert Vein and Paul Dale). Would it be more beneficial to study these two books first and then an abstract algebra course, or would either one of these make a decent prerequisite to abstract algebra? Or does none of these necessarily follow from the others? In particular, I suppose I am interested in how coefficient matrices might be studied. I am looking to save myself time by asking for guidance in how to approach these various subjects, how they relate to each other, and so on, and I am beginning to worry that I am repeating myself and being redundant, so I will stop here. Thank you for your help, by the way.
  19. What are coefficients, really? The elementary answer is, for, say, a polynomial of the form y=ax^2+bx+c, that a and b are coefficients that are usually held to be constant or that might be construed as variables in some cases, and that c is a constant that is, in the case of a polynomial, the y-intercept. But what ARE coefficients? What is the formal definition of each one and/or of what they are as a set? That is, if we say that the coefficients of a polynomial (or any function/mapping) are S={a1,a2,...,an}, what is this set by itself? The answer should include polynomials but can include rational expressions or linear functions, though I know that in the linear case the numbers m and b, for instance, are the slope and y-intercept. But what do these numbers make up as a set? Maybe there is no greater definition, but if there is one I would like to hear it. Thanks!
  20. Thanks for all of your input. I appreciate the help. I think I have a clearer understanding of the difference between interpolation and optimization now.
  21. Very nice. So, it would seem, for any three points a circle can be found to fit through them, unless they are collinear, in which case the circle would have an infinite radius. So could interpolation be described as a form of optimization given a set of constraints? For example, an optimization through n points with the additional condition/constraint that the form must be a polynomial/rational/exponential/etc. Or perhaps I have it backwards and we should look at it as a form type, for instance a sine wave, that must fulfill the additional condition that it pass through a set of "special" points, special in that they are not what the sine wave would pass through on its own. Is that right, or does interpolation have nothing to do with optimization?
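     To check the circle claim for myself, I set up the general circle x^2 + y^2 + Dx + Ey + F = 0 through three points, which is linear in D, E, and F (numpy; the points are made up):
        import numpy as np

        pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # sample non-collinear points

        A = np.array([[px, py, 1.0] for px, py in pts])
        b = np.array([-(px**2 + py**2) for px, py in pts])
        D, E, F = np.linalg.solve(A, b)

        cx, cy = -D / 2, -E / 2
        r = np.sqrt(cx**2 + cy**2 - F)
        print((cx, cy), r)  # center (0.5, 0.5), radius ~0.7071

        # collinear points make A singular, so solve() raises LinAlgError:
        # the "infinite radius" degenerate case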
  22. So I Googled Hamilton-Lagrange and somehow ended up reading information on the Euler-Lagrange method. I watched https://www.youtube.com/watch?v=08vJyA-XD3Q which shows that the optimal path between two points is a straight line. My immediate question is: how do we find the shortest path through, say, three points? I do not mean two linear equations that go from point a to point b and then from point b to point c, but rather how might one go about finding an optimal path? Does it become too complicated? Or is it a matter of there being many solutions? Right now I am not thinking about gravitational fields or air resistance, but only in pure terms, as in interpolation. I welcome any input, insight, or suggested reading.
  23. Constant fighting and stupidity are why I try to avoid most forums. I agree, this is a nice one. The calculus of variations? I will certainly look into it. To "determine the best curve through data points according to some pre-established criterion" sounds like exactly what I have in mind. I like the beauty of pure mathematics but I also lean toward application. That the physical world speaks in mathematics is one of the things I like most about mathematics. Have a good night.
  24. I understood your explanation, though I did not understand how your triangular grid relates to interpolation. I also understand the concept of a differential equation, but I do not know what a finite element grid is, at least not by name. I don't know if that answered your question or not. Apart from taking a few local classes I am almost entirely self-taught, by buying books and reading them and from online courseware by Khan Academy, MIT, etc. on youtube. Teaching myself is something I can do. The hardest part is knowing what subjects to study and in what order. For instance, I just taught myself linear algebra and now I am starting books on tensor analysis (the book by Bishop and Goldberg and another one by Grinfeld, who also has a lecture series on Youtube), but I could just as easily have tried tensors first only to realize I needed linear algebra as a prerequisite. I was lucky in this case. I didn't always have a particular goal in mind because small schools are the only ones I have ever had available to me, and they do not teach anything further along than multivariable calculus and first-order ordinary differential equations. Now that I have explored on my own I find the subject of interpolation particularly fascinating... I just don't know what to study or what direction to even head in. I mentioned numerical analysis but I don't really know if that is the right direction. I don't spend much time on forums, but I thought it was a good time to first ask whether there is one single method for fitting most functions before I further explored the appropriate subjects (whatever they turn out to be), or whether the only way to learn how to fit a wide range of functions is to study interpolation and curve-fitting in more depth. If I strike you as someone who doesn't really seem to know what they are doing, you are correct. That is why I was seeking advice. In summary, I understand calculus, (basic) differential equations, and linear algebra, I am now in the first few pages of a book on tensors, and I am interested in learning either a general method for fitting many different kinds of functions (algebraic and transcendental, anyway) or a wide range of such methods, or both.
  25. studiot: That is really fascinating. Thank you for taking the time to do all of that. You explained it really well. In your opinion, what subject or subjects should I study in order to learn more about interpolation and its various associated methods? Numerical analysis? Any other information is welcome. Thanks again for giving your time to share all of that. And no worries about the "contrived" remark. I wondered but I understand now. overtone: That is interesting. Interesting too that math has applications in those subjects, though I guess it makes sense when I consider topics like population, predator-prey models, and the spread of disease.