Everything posted by joigus

  1. I have to agree with studiot's disagreement. That's one of the most common obfuscations when studying thermodynamics (TD). In TD you never go outside the surface of state, defined by the equation of state f(P,V,T,n)=0. That's why P, V and T most emphatically are not independent variables. This is commonly expressed as the fundamental constraint among the derivatives, (∂P/∂V)_T (∂V/∂T)_P (∂T/∂P)_V = -1, which leads to unending "circular" pain when trying to prove constraints among the thermodynamic coefficients of a homogeneous substance, for teachers and students alike. 'Kinetics' is kind of a loaded word. Do you mean dynamics vs kinematics in the study of motion, or as in 'kinetic theory of gases' or 'chemical kinetics'? Sorry, I really don't understand. But I would be really surprised if a theory about anything in Nature missed the energy arguments. Sometimes you can do without them, but there are very deep reasons for energy to be of central importance. I would elaborate a bit more if you helped me with this.
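A quick symbolic check of that constraint, as a minimal sketch assuming sympy and taking the ideal gas PV = nRT as the equation of state (my choice of illustration, not from the thread):

```python
# Minimal sketch (assuming sympy): check the cyclic rule
# (dP/dV)_T (dV/dT)_P (dT/dP)_V = -1 for the ideal gas PV = nRT.
import sympy as sp

P, V, T, n, R = sp.symbols('P V T n R', positive=True)

# Solve the equation of state for each variable in turn.
P_of = n*R*T/V          # P as a function of V, T
V_of = n*R*T/P          # V as a function of T, P
T_of = P*V/(n*R)        # T as a function of P, V

product = sp.diff(P_of, V) * sp.diff(V_of, T) * sp.diff(T_of, P)

# Substitute P = nRT/V so everything stays on the surface of state.
print(sp.simplify(product.subs(P, n*R*T/V)))   # -> -1
```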
  2. "It is the customary fate of new truths to begin as heresies and to end as superstitions"

    T. H. Huxley

  3. I concur with swansont. Only, I think he meant that W = -delta(PV) assumes constant P, as W = -P(delta V) is just the definition of work for a P, V, T, n system (the simplest ones). And when n, T are constant ==> d(PV)=0 ==> W = -PdV = +VdP (for that case, in an ideal gas).
     Just to offer a mathematical perspective: if you differentiate (increment) PV = nRT, you get PdV + VdP = nRdT (d = your "delta" = increment, small change), or, for varying n, PdV + VdP = RTdn + nRdT, because, as swansont says, you must know what's changing in your process, and how. You see, in thermodynamics you're always dealing with processes. To be more precise, reversible processes (that doesn't mean you can't do thermodynamic balances for irreversible processes too, which AAMOF you can). Whenever you write "delta," think "process." So, as swansont rightly points out, what's changing in that process?
     The culprit of all this is the fact that physics always forces you to consider energy, but in thermodynamics a big part of that energy is getting hidden inside your system, no matter what you do, in a non-usable way. This is very strongly reflected in the first principle of thermodynamics, which says that the typical ways of exchanging energy for a thermal system (work and heat) cannot themselves be written as the exchange of anything, even though, together, they do add up to the exchange of something (here and in what follows, "anything" and "something" mean variables of the thermodynamic state of a system: P, V, T, PV, log(PV/RT), etc.).
     So your work is -PdV, but you can never express it as d(something). We say it's a non-exact differential. It's a small thing, but not a small change of anything. The other half of the "hidden stuff" problem is heat, which is written as TdS, S being the entropy and T the absolute temperature, but again you can never express it as d(something). Again, a non-exact differential: a small thing, but not a small change of anything. (There's a tiny numerical illustration of this path dependence right below.)
     Enthalpy and Gibbs free energy are clever ways to express heat exchange and work as exact differentials, under given constraints on the thermodynamic variables. And Helmholtz's free energy is something like the mother of all thermodynamic potentials, and its true pride and joy.
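That numerical illustration, a sketch with made-up numbers (1 mol of ideal gas; the temperature and volumes are mine, chosen only for the demo): the work between the same two end states depends on how you get there, which is the sense in which -PdV is not d(something).

```python
# Minimal sketch (illustrative numbers, 1 mol of ideal gas): the work
# W = -integral(P dV) between the SAME two states depends on the path,
# which is why -P dV is not the differential of any state function.
import math

R  = 8.314            # J/(mol K)
n  = 1.0              # mol (assumed)
T  = 300.0            # K, initial temperature (assumed)
V1, V2 = 0.010, 0.020 # m^3, initial and final volumes (assumed)
P1 = n*R*T/V1
P2 = n*R*T/V2         # final state chosen on the same isotherm

# Path A: reversible isothermal expansion, P = nRT/V all the way.
W_isothermal = -n*R*T*math.log(V2/V1)

# Path B: expand at constant pressure P1, then cool at constant volume V2
# down to P2 (the second leg does no work, since dV = 0).
W_two_step = -P1*(V2 - V1)

print(W_isothermal, W_two_step)   # different numbers, same end states
```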
  4. Yes, taeto, you are right, unless I'm too sleepy to think straight. The thing that's missing in your argument is the transformation matrix, which is, I think, what you were getting at. I don't know if you're aware of it, but any Gauss reduction operation can be implemented by a square non-singular matrix, a change-of-basis or "reshuffling" matrix. Let's call it D. So that AB = A D D^(-1) B = A'B'. The "indexology" goes like this: (m×n)×(n×m) = (m×n)×(n×n)×(n×n)×(n×m). The first factor, A' = AD, would be an upper-triangular matrix (guaranteed by a theorem I can barely recall) but, as it has fewer columns than rows, at least the bottom row must be the zero row, so that the product must have a zero row. Right? (AAMOF you can do the same trick either by rows on the left or columns on the right; it's one or the other. Then you would have to apply a similar reasoning to B instead of A; you're welcome to fill in the details.) This is like cracking nuts with my teeth to me, sorry. That's what I meant earlier. But that was a very nice piece of reasoning. It's actually not a change-of-basis matrix, but a completely different animal.
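A small numpy sketch of that point, with matrices I've made up for the purpose (not from the thread): a single Gauss-type column operation on A is exactly a right-multiplication by a non-singular square D, and sandwiching D D^(-1) between A and B leaves the product untouched.

```python
# Sketch (numpy, illustrative matrices): one Gauss-type column operation on A
# is right-multiplication by a non-singular square matrix D, and
# AB = (A D)(D^{-1} B), so the product is unchanged.
import numpy as np

A = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])          # m x n with m = 3 > n = 2 (assumed sizes)
B = np.array([[1., 0., 2.],
              [0., 1., 3.]])      # n x m

# D subtracts 2 x (column 1) from column 2 of A; det(D) = 1, so it's invertible.
D = np.array([[1., -2.],
              [0.,  1.]])

A_prime = A @ D
B_prime = np.linalg.inv(D) @ B

print(np.allclose(A @ B, A_prime @ B_prime))   # True: the product is untouched
```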
  5. You may be right. Dimensional arguments could work. Let me think about it and get back to you in 6+ hours. I have a busy afternoon. Thank you!
  6. Gaussian elimination does not help here, the reason being that it requires you to reduce your matrix to a triangular form, and in order to do that you need the actual expression of the matrix, not a generic a_mn.
  7. How can a continuum be a constant? Could you elaborate on that? Maybe you're on to something. Can a stone be unhappy? See my point? If there is one feature of gravity that singles it out from every other force in the universe, it is the fact that you can always locally achieve absence of gravity (equivalence principle, EP). The only limit to this is second-order effects, AKA tidal forces. Jump out of a window and you'll find out about the EP. Get close to a relatively small black hole and you'll find out about tidal forces. Read a good book and you'll find out how this all adds up. Oh, and mass is not what GTR is concerned with at all, as it plays no role in the theory. It's all about energy. It's energy that provides the source of the field. What you call mass is just rest energy, and this is no battle of words. Photons of course have no mass because they have no rest energy; and they have no rest energy because... well, they have no rest. Incorrect: Special Relativity (SR) says nothing (massless or not) can travel faster than the speed of light. Because GTR says the geometry of space-time must locally reduce to that of SR, things moving locally can't exceed c. In other words: things moving past you can't do so faster than c. People here have been quite eloquent, so I won't belabor the point. I don't want to be completely negative. My advice is: read some books, with a keen eye on experimental results; then do some thinking; then read some more books; then some more thinking, and so on. Always keep an eye on common sense too. Listen to people who seem to know what they're talking about; ask nicely about inconsistencies and for more information, data. Always be skeptical, but don't just be skeptical. It doesn't lead anywhere.
  8. Sorry, I meant d^2/dx^2, not d/dx: d^2/dx^2 of (0 0 1 0 0 0) is (2 0 0 0 0 0), just as d^2/dx^2 of x^2 is 2 times 1.
  9. Depends on what they want to illustrate with it. Do you mean a state of uncertainty? It's actually a paradox that cosmologists face every single day. No down-to-Earth physicist worries about it, because they use the quantum projection, or collapse, or wave-packet reduction, as you may want to call it. They know whether the cat is dead or alive. As to cosmologists... who was looking at the universe when this or that happened, you know? It's not useful for anything; it's just there, looking us in the face. It's a pain in the brain. Yes. No.
  10. If you are a future mathematician, I would advise you not to try to think of the cotangent space as something embedded in the space you're starting from. In fact, I bet your problem is very much like mine when I started studying differential geometry: you're picturing in your mind a curved surface in a 3D embedding space, the tangent space as a plane tangentially touching one point on the surface, and then trying to picture in your mind another plane that fits the role of cotangent in some geometric sense. Maybe perpendicular? No, that's incorrect! First of all, try to think in terms of intrinsic geometry: there is no external space embedding your surface. Your surface (or n-surface) is all there is. It locally looks, to insiders, like a plane (or a flat space). What's the other plane? Where is it? It's just a clone of your tangent plane, if you wish, that allows you to obtain numbers from your vectors (projections) in the tangent plane. It's the set of all the vectors you may want to project your vector against, therefore some kind of auxiliary copy of your tangent space. That's more or less all there is to it. Sometimes there are subtleties involved in forms/vectors, related to covariant/contravariant coordinates, if you wish to go a step further and completely identify forms with vectors when your basis is not orthogonal (a small numerical sketch of that is right below). That's why mathematicians have invented a separate concept. Also because mathematicians sometimes need to consider a space of functions, and the forms as a bunch of integrals (very different objects). In the less exotic case, the basis of forms identifies completely with the basis of contravariant vectors. I will go into more detail if you're curious about it, or send you references. I hope that helps.
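That numerical sketch, assuming numpy and a toy non-orthogonal basis of my own choosing: the dual ("cotangent") basis is the set of covectors f^i satisfying f^i · e_j = delta^i_j, and projecting against it hands you back a vector's components.

```python
# Sketch (numpy, toy numbers): for a non-orthogonal basis {e_1, e_2},
# the dual basis {f^1, f^2} is defined by f^i . e_j = delta^i_j,
# and it's what hands you the components of a vector when you project.
import numpy as np

E = np.array([[1., 1.],
              [0., 2.]])           # columns are e_1, e_2 (non-orthogonal, assumed)
F = np.linalg.inv(E)               # rows are the dual basis covectors f^1, f^2

print(F @ E)                       # identity matrix: f^i . e_j = delta^i_j

v = 3*E[:, 0] + 5*E[:, 1]          # a vector with known components (3, 5)
print(F @ v)                       # [3. 5.]  -- the dual basis recovers them
```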
  11. 1) "How fast the shark is moving away from the lifeguard station" requires you to think about vectors. Picture an imaginary straight line lifeguard-shark and try to think how it changes. 2) The datum you're given is the speed (the norm, or intensity, or "modulus" of the vector), not that velocity itself. That's along another line (parallel to the coast). 3) Think Pythagoras. He was a very wise guy, or maybe a bunch of guys, nobody knows to this day. And I can't give you any more clues (well, maybe the sketch below, with made-up numbers).
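Here is that Pythagoras step sketched with numbers I've invented (the thread's actual figures aren't quoted above): the station-to-shark distance is the hypotenuse, and its rate of change comes from differentiating d = sqrt(x^2 + y^2) with respect to time.

```python
# Sketch with made-up numbers: the shark swims parallel to the shore, the
# lifeguard station sits on the shore, and the station-shark distance is the
# hypotenuse of a right triangle, so d(distance)/dt follows from Pythagoras,
# not from the shark's speed alone.
import math

y    = 50.0    # m, shark's fixed distance offshore (assumed)
x    = 120.0   # m, along-shore separation between shark and station (assumed)
dxdt = 2.0     # m/s, shark's speed parallel to the shore (assumed)

d   = math.hypot(x, y)             # distance station-shark
ddt = x * dxdt / d                 # differentiate d = sqrt(x^2 + y^2) in time

print(d, ddt)                      # the separation grows more slowly than 2 m/s
```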
  12. I see no significant mistake in the enunciation of the principle. I wouldn't include time in it, though, nor do I know of any formulation that does. Another hopefully useful observation is that isotropy everywhere implies homogeneity, which is kind of more economical to me, but not really a big deal. As to current limits to its application/validity/solidity, I hope you find my comments below interesting.
     The whole issue of the universe being homogeneous and isotropic at 'large' scales is, in my opinion, a very suspect hypothesis. It looks kind of reasonable, though, and allows you to gain access to the big picture of what goes on. But 1) from the theoretical perspective, we do know that quantum field theory (QFT), when combined with the general theory of relativity (GTR) in inflationary models, predicts a universe that is more like a fractal, meaning a scale-independent series of embedded structures that may or may not look like clustering depending on the scale you look at it. And 2) from the observational point of view, the universe does seem to display huge voids in its structure, very strongly resembling that fractal that QFT+GTR predicts. It looks more like the caustics in a swimming pool, in 3D (numerical simulations of structure formation show this quite strikingly).
     About isotropy, a very recent piece of news from the experimental front is this: https://phys.org/news/2020-04-laws-nature-downright-weird-constant.html?fbclid=IwAR3_NdXDNfcNU05E8khtN1pnshucr-gr7KoJO5OTh6OAuDDX19Z5yUBPD_c The headline reads, "New findings suggest laws of nature 'downright weird,' not as constant as previously thought". UNSW (Sydney) professor John Webb: "We found a hint that that number of the fine structure constant was different in certain regions of the universe. Not just as a function of time, but actually also in direction in the universe, which is really quite odd if it's correct... but that's what we found." If that's true, not only would the universe not be homogeneous; it wouldn't be isotropic either, and at the deepest level, because what's different is the electromagnetic coupling constant itself. Now, this would really be amazing, and we should take it with a grain of salt.
     The statement that the universe is homogeneous in time is tantamount to saying that it looked pretty much the same in the past as it does now or will in the future. It was obviously not the same in the past, as it looked like a singularity, was then opaque to radiation and neutrinos (a plasma), then radiation-dominated, then matter-dominated, and today it's considered to be dark-energy dominated. So it doesn't really look like it's going to be the same in the future either, as it will expand exponentially.
  13. I don't know whether you're familiar with index notation. If you are, I think I can help you. If you aren't, I can't, because it's just too painful. They will have told you about Einstein's summation convention. Don't use it for this exercise, because if you do, you're as good as lost. The key is: you need m indices that run from 1 to m, and another bunch of m indices that run from 1 to n. You also need the completely antisymmetric Levi-Civita symbol e_{i1...im}. Now, the indices that run from 1 to n (the inner-product indices) I will call k1, ..., km; the other multi-index I will call i1, ..., im; and the third one, the second free index, I will fix to be 1, ..., m. Then, writing all the sums explicitly, det(AB) = sum over the i's and the k's of e_{i1...im} A_{1 k1} B_{k1 i1} ... A_{m km} B_{km im} = sum over the k's of A_{1 k1} ... A_{m km} [ sum over the i's of e_{i1...im} B_{k1 i1} ... B_{km im} ]. Now it takes a little insight: the last factor (the bracket) is the det of m vectors, the rows k1, ..., km of B, of which at most n are distinct; m vectors in what is effectively an n-dimensional space. As m > n, it is therefore a linearly dependent set, so it must be zero. You can understand this better if you think of the det as a multilinear function of m vectors. (A quick numerical check of the end result is sketched below.)
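That quick numerical sanity check, assuming numpy and random matrices (the sizes are mine, picked just for the demo):

```python
# Quick numerical check (numpy, random matrices): if A is m x n and B is
# n x m with m > n, then det(AB) vanishes, because rank(AB) <= n < m.
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 3                         # m > n (assumed sizes for the demo)
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, m))

print(np.linalg.det(A @ B))         # ~ 0 up to floating-point noise
print(np.linalg.matrix_rank(A @ B)) # 3, never more than n
```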
  14. Well, yes, but you must be careful with a couple of things. First: if you integrate x^5 you get off limits; x^6 no longer is in your space. You must expand your space so as to include all possible powers. Then you're good to go. Second: you must define your integrals with a fixed prescription of one limit point, for example integrating from 0 to x, so that they are actually one-valued mappings. Then it's correct. You don't have this problem with derivatives, as you can differentiate the number zero till you're blue in the face and never get off limits. If you were using functions other than polynomials, you would have to be careful with convergence of your integrals. But polynomials are well-behaved functions in that respect. Hope it helps. (There's a small sketch of the fixed-lower-limit integration map below.)
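That sketch, assuming sympy (the fixed lower limit 0 is my choice, just to make the map one-valued): integrating the degree-at-most-5 basis lands you in the degree-at-most-6 space, which is why the space has to be enlarged.

```python
# Sketch (sympy): integrating with a fixed lower limit, here from 0 to x,
# sends the degree-<=5 basis {1, t, ..., t^5} into the degree-<=6 space,
# so the enlarged space {1, x, ..., x^6} is closed under this map.
import sympy as sp

x, t = sp.symbols('x t')

basis = [t**k for k in range(6)]                 # 1, t, ..., t^5
images = [sp.integrate(p, (t, 0, x)) for p in basis]

print(images)    # [x, x**2/2, x**3/3, x**4/4, x**5/5, x**6/6]
```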
  15. You're right, there is a theorem. It's really to do with the fact that you've got a linear isomorphism, that is, a mapping f such that f(A + B) = f(A) + f(B) and f(cA) = c f(A); that is, one that preserves the linear operations in your initial space. Your initial space must be a linear space too, under (internal) sum and (external) multiplication by a constant. Now, the objects A, B, etc. can be most anything. They can be polynomials, sine/cosine functions, anything. The key facts are that the d/dx operator is linear and that the polynomials, under sum and product by scalars, are a linear space. The isomorphism here would be the one assigning to a polynomial its vector of coefficients. And your intuition is correct: there is no limit to the possible dimension of a linear space. Quantum mechanics, for example, deals with infinite-dimensional spaces, so the transformation matrices used there are infinite-dimensional matrices. In that case it's not very useful to write the matrices as "tables" on paper. I hope that helps.
  16. Exactly right. Check it yourself. It's a fun exercise. On that space, the diff operator "is" the matrix.
  17. Exactly. 1 would be (1 0 0 0 0 0), x: (0 1 0 0 0 0), x^2: (0 0 1 0 0 0), etc. (read as column vectors). And d/dx of (0 0 1 0 0 0) is (2 0 0 0 0 0) just as d/dx of x^2 is 2 times 1.
  18. Depends on how you order your basis. Let's say {1, x, x^2, ...} (I'd pick a 'natural' basis, meaning one in which your matrix looks simpler; of course any non-singular linear combo of them would do). The transform of x^n is n(n-1)x^(n-2), so what it does is multiply by n(n-1) and shift the coefficient two slots (here's where the ordering of your basis matters in what the matrix looks like, so if you order the other way around, the T (transformation) matrix would look like the transpose). So, on the ordered basis {1, x, x^2, x^3, x^4, x^5} written as column vectors, your matrix would be something like
     ( 0 0 2 0  0  0 )
     ( 0 0 0 6  0  0 )
     ( 0 0 0 0 12  0 )
     ( 0 0 0 0  0 20 )
     ( 0 0 0 0  0  0 )
     ( 0 0 0 0  0  0 )
     That is, column n carries the coefficients of the image of x^n. Please check; I may have made some ordering mistake, missed a row, etc. (The sketch below builds this same matrix programmatically.)
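And that sketch, assuming sympy: it builds the same matrix from the ordered basis {1, x, ..., x^5} and checks it on a sample polynomial (the sample is my own pick).

```python
# Sketch (sympy): the matrix of d^2/dx^2 on the ordered basis {1, x, ..., x^5},
# with polynomials written as column vectors of coefficients; column j carries
# the image of x^j. Worth checking against the matrix written out above.
import sympy as sp

x = sp.symbols('x')
N = 6
T = sp.zeros(N, N)
for j in range(N):
    coeffs = sp.Poly(sp.diff(x**j, x, 2), x).all_coeffs()[::-1]  # low degree first
    for i, c in enumerate(coeffs):
        T[i, j] = c

print(T)

# Verify on p(x) = 1 + 4x^3: coefficients (1, 0, 0, 4, 0, 0) -> p'' = 24x.
p = sp.Matrix([1, 0, 0, 4, 0, 0])
print((T*p).T)        # (0, 24, 0, 0, 0, 0), i.e. 24x
```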
  19. Please, don't apologize. I wasn't making much sense biologically, and now I realize that I'm confusing really basic stuff in the RNA maturation process. I really must go over my notes and books before I make more of a mess and take much more of your time. What does make sense in what I'm saying, I think, is that different reflection symmetries in the chain to be cut must play an important role in the cutting process, because the enzyme's job is to cut a line of covalent bonds, so a very high energy barrier must be overcome. As CharonY says, the general idea that I get from these comments is something like: oh, so the enzymes that do the job of cutting must either twist the strand, or bend it, and then act like scissors... Now, these physical actions all require some handling of the object with different configurations of strong, opposing pairs of forces. This is very different from what is required in, e.g., helicases, which only need to overcome hydrogen bonds to untangle the double strand for reading, and which don't require any kind of symmetric grasping of the molecule that I can think of.
     In one of the images that you have so kindly attached (while much overestimating my understanding of biology), I've found something very interesting. I can see a single-stranded sequence under the tag hsa-miR-25-5p that reads TCCGCCT. Now, I don't know what the significance of that sequence is, but it is a palindrome in a different sense than palindromes in double-stranded DNA are. This is a palindrome of itself in the sense of ordinary-language palindromes, i.e., if you read it in the 5'-to-3' direction instead of 3'-to-5', it doesn't change. The palindromes selected for cutting in double-stranded DNA, for example, are different. They are palindromes only if you apply a sequence of two "inversions". Take, for example, my blah-blah example, AGGCCT. First invert (read 5' to 3' instead of 3' to 5'): AGGCCT --> TCCGGA. Then complementary-invert (A-->T, C-->G, G-->C, T-->A): TCCGGA --> AGGCCT. And you're back where you started. (Both steps are spelled out in the little sketch below.) The fact that different kinds of palindromes pop up when cutting, twisting, etc. are involved is not, I think, coincidental. Free-energy considerations don't interest me so much at this point, important though they are.
     Please don't trust me when I say anything strictly biological, as it's well over my head there. And do feel free to drop the conversation at any point if you don't find it useful or revealing or anything. Do 'dimers' here (or elsewhere in biology) refer to primary structure only? Two symmetrically-placed tertiary-structure blobs of protein weakly attached to each other wouldn't be a dimer, would they? My ignorance shows, I know. Thank you very much.
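Here are the two inversions written out as a tiny Python sketch (the helper names are mine): a sequence is palindromic in the double-stranded sense when it equals its own reverse complement, which is a different property from being an ordinary-language palindrome like TCCGCCT.

```python
# Sketch: the two-step "inversion" described above (read the sequence backwards,
# then swap A<->T and C<->G); a sequence is a palindrome in the double-stranded
# sense when it equals its own reverse complement.
COMP = {'A': 'T', 'T': 'A', 'C': 'G', 'G': 'C'}

def reverse_complement(seq: str) -> str:
    return ''.join(COMP[base] for base in reversed(seq))

def is_palindromic(seq: str) -> bool:
    return seq == reverse_complement(seq)

print(is_palindromic('AGGCCT'))   # True  -- the example used in this thread
print(is_palindromic('TCCGCCT'))  # False -- a literal palindrome, but not this kind
```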
  20. Sure. Thanks a lot for your interest. I was referring to the possibility of selectively cleaving sequences in a similar way to how restriction enzymes are used to cleave DNA to mass-produce genomic libraries. In this case, though, the target would be RNA. The only example I know of RNA that has complementary (let's say "locally" double-stranded) sub-sequences is tRNA, and from what I remember, cleavage of nucleic acids in Nature only happens on double-stranded sequences, e.g. tRNA cleaved by eukaryotes in the splicing process (and, as I've just learnt from the references you provided, by some archaea!?). In other words: would it be possible to mimic the endonucleases' job with tRNA, synthesize them, maybe modify them for human purposes? Sorry I said "opposite" instead of "complementary" and the like. And thanks a lot for the references.
  21. Here's an idea that, if too out there, I'd wholeheartedly thank you to dismiss as nicely and informatively as possible. Something that very much drew my attention years ago about restriction enzymes is the fact that they always seem to act on palindromic sequences of DNA, not in the usual language-related sense, but in the sense that, for a bunch of bases, the sequences usually (maybe always?) are palindromes of their inverses on the complementary strand, like AGGCCT / TCCGGA. (Sorry if there's a specially reserved codon in there; I wasn't particularly careful with the example.) Now, I don't think that's a coincidence, and I'm pretty sure there must be a physical reason for it. My best guess is that there must be a physical action, like a cutting torque at the molecular level (my background is theoretical physics), leading to the controlled chopping of the polymer precisely at that spot. Opposite pulling or pushing forces would lead to that in a way that's very intuitively easy to picture. Does that make any sense at all? If so, could a similar mechanism work for endonucleases on tRNA, in complementary sequences attached for splicing? I'm not even sure that tRNA splices at palindromes, or even whether that's been understood in any detail. Thank you very much in advance.
  22. This may be a silly question, but just in case. Can someone tell me if there is the possibility of using RNA-splicing endonucleases like the ones referred to at https://www.ncbi.nlm.nih.gov/pubmed/18217203 to target specific sections of a known virus, in order to deactivate it inside the cell in a similar way to how restriction enzymes in bacteria work against their bacteriophages? Maybe by designing or selecting them to look for very specific viral sequences, but long enough that they have a very low repeat probability? Or maybe by methylating the sensitive areas in the self RNA? I don't even remember if RNA is susceptible to methylation, as DNA is. Or maybe both? Is that even possible? Or maybe too out there. I'm not even sure this is the proper place to pose this question; I'm sorry if that's the case. Thanks a lot in advance.