Everything posted by Enthalpy

1. Found some answers. ----- All actors consider wavelengths like 13.5nm or 11nm. The synchrotron and undulator I considered are already an old idea; the current advances are fluorescence in various forms, which provides more power than the half-watt typical of synchrotrons and undulators. Present sources are ArF excimer lasers, which deliver for instance 60W at 193nm. Lenses are a difficulty because materials are opaque. Mirrors aren't much better, achieving 50% reflectivity at normal incidence. ----- I wanted to give the accelerating cavities the same wavelength as the undulator, and now I'm confident this already exists. I wanted to add a Fabry-Perot cavity around the undulator to increase the output power. This has been considered but is difficult at such a wavelength because neither mirrors nor light guides are efficient. Apparently I won't bring anything new nor useful to this topic, so I give it up.
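     To see why 50% reflectivity hurts so much: throughput falls geometrically with the number of bounces. A quick Python sketch; the mirror counts are assumed illustrative figures for a multi-mirror optic, not from the post.

        # Throughput of an optic made of mirrors at 50% reflectivity each.
        # The mirror counts are invented examples, not figures from the post.
        reflectivity = 0.5
        for mirrors in (2, 4, 6):
            print(f"{mirrors} mirrors -> {100 * reflectivity**mirrors:.1f}% throughput")

     Already six bounces leave under 2% of the light, which is why the half-watt source figures matter so much.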
2. Hope to have understood the question... Biomass is usable quickly with present technology, storable, nearly affordable - but from my former estimates we have too little farmland to answer all our energy needs, even if converting cellulose. It would need to use deserts, the ocean floor... Geothermal, wind and solar energy are definitely abundant enough. They may be available when we need them (solar thermal...) and affordable; right now they're roughly as expensive as nuclear electricity, which means hugely more expensive than gas and coal.
3. http://en.wikipedia.org/wiki/Ionosphere#Radio_application except that calling this "refraction" is misleading! In the case of our ionosphere, it's reflection against a conductive plasma. You're perfectly right, the index of air is nearly 1, which would not allow total reflection at any interesting angle against a medium with index >1. Meta-materials can have an index <1, but only over a very limited frequency range, near some kind of resonance. Similar to a waveguide, which propagates the phase significantly faster than c slightly above its cutoff frequency.
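     For concreteness, here is the standard cold-plasma cutoff computed in Python; waves below this frequency bounce off the layer. The electron density is a typical daytime F-layer figure, an assumption on my part, not a number from the post.

        import math

        EPS0 = 8.854e-12   # vacuum permittivity, F/m
        M_E  = 9.109e-31   # electron mass, kg
        Q_E  = 1.602e-19   # elementary charge, C

        def plasma_frequency(n_e):
            """Cold-plasma frequency in Hz for electron density n_e in m^-3.
            Radio waves below this frequency are reflected by the plasma."""
            return math.sqrt(n_e * Q_E**2 / (EPS0 * M_E)) / (2 * math.pi)

        # Typical daytime F-layer density (illustrative, ~1e12 m^-3):
        print(f"{plasma_frequency(1e12)/1e6:.1f} MHz")  # ~9 MHz, so HF bounces back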
  4. Conducting path. http://en.wikipedia.org/wiki/Lightning_rod
5. Hello semiconductor technologists! I'm thinking of a source of extreme UV for semiconductor lithography, so what would be your wishes, compared to the first figures that emerge from my rantings? I take 30nm wavelength. Would less be better? I fear fluorescence hampers lithography. Right now I estimate half a watt of light is produced, continuous and monochromatic and coherent, initially in a narrow beam of small divergence. Could that be enough? This absolute silver bullet would cost several millions. Is that any worry? Thanks! Marc Schaefer, aka Enthalpy
6. That's the fundamental idea, yes. Then, you can add subtleties:
     - Long jumps exist, and because the executable code contains addresses and other information whose compression has a different efficiency than the instructions, you need some method to predict which page of the compressed executable contains the sought address. A table of addresses where each page begins? A table of jump addresses? (Difficult with computed jumps.)
     - The real goal, but difficult, is x64 code - nearly 100% of the Cpu market soon, and about all the rest is i386 anyway. i386 code is highly irregular because of history; x64 may be easier. Risc is better but hasn't taken the market, alas, so forget this theoretical possibility.
     - You have to compress addresses as well, since these make a good proportion of x64 code. They waste 64 bits even though programmes and data need less, so something like identifying a few segments and recoding as relative addresses should save a lot. Check if dynamic addresses can be created. Mind the paging mechanism. Use associative memory if needed.
     - Among the 128 instructions you give as an example, some are more frequent and should have a shorter code - this is known compression theory; see the sketch at the end of this post. Other compression methods identify data that frequently belong together, say Decrement then Compare followed by Branch, and replace the group with a shorter code. Though, hard-wired on-the-fly hardware limits the achievable subtlety of compression.
     - The no-execute bit (since Amd64, P4, Windows Xp) tells you which page contains data or programme - a relief.
     Two extreme approaches would be:
     - Use a known compression method (Lzma...) not specialized in programmes, and see how to hardwire it; probably inefficient.
     - Take the x64 instruction set and decide by hand how to compact it.
     The intermediate method would let a compressor run on a few x64 programmes; then you check by hand what you keep or not, freeze it and hardwire it. UPX is a software decompressor (and compressor) meant for executables http://upx.sourceforge.net/ - if it could be hardwired as is, this would be the answer, but I've no idea if this is possible. Or simplify it until hardware can expand on-the-fly. This looks like the first-choice approach, because software decoding by Upx already improves the performance of computers with slow disks. And people at sourceforge may be helpful. To be clear: as software, UPX loads the compressed executable from the disk into the Ram and expands it in the Ram, so it saves no Ram throughput nor size. The hardware expander I imagine keeps the executable compressed in the Ram and maybe the L3 cache and expands it in the Cpu, to save Ram throughput and size. Expansion would be between Ram and L3 or between L3 and L2. Beware that in the L1 cache, the code is transformed into micro-ops and macro-ops since the P4 - still a different story - so I'd keep my fingers away from L1. Such engineering details make the project less obvious than it first seems - but hey, you must deserve your diploma! And the implications are big, as Ram throughput makes much of the difference between a socket 1155 and a socket 2011. Think as well of Ram size, disk throughput, disk size... Fun? At least to my taste, yes. Marc Schaefer, aka Enthalpy
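     As a sketch of the "frequent instructions get shorter codes" point, here is a toy Huffman coder in Python over an invented opcode trace; it's nothing like real x64 decoding, just the compression principle applied to a 128-opcode instruction set.

        import heapq
        from collections import Counter

        def huffman_code(symbols):
            """Build a prefix code: frequent opcodes get the shortest bit strings."""
            heap = [(freq, i, {sym: ""})
                    for i, (sym, freq) in enumerate(Counter(symbols).items())]
            heapq.heapify(heap)
            tiebreak = len(heap)
            while len(heap) > 1:
                f1, _, c1 = heapq.heappop(heap)
                f2, _, c2 = heapq.heappop(heap)
                merged = {s: "0" + b for s, b in c1.items()}
                merged.update({s: "1" + b for s, b in c2.items()})
                heapq.heappush(heap, (f1 + f2, tiebreak, merged))
                tiebreak += 1
            return heap[0][2]

        # Invented opcode trace: MOV dominates, so it gets the shortest code.
        trace = ["MOV"] * 50 + ["ADD"] * 20 + ["CMP"] * 15 + ["JNZ"] * 10 + ["MUL"] * 5
        code = huffman_code(trace)
        fixed  = len(trace) * 7                      # 7 bits for 128 fixed-size opcodes
        packed = sum(len(code[op]) for op in trace)  # variable-length encoding
        print(code)
        print(f"compressed to {100 * packed / fixed:.0f}% of the fixed-size encoding")

     On this made-up trace the variable-length code needs under a third of the bits; real instruction streams are less skewed, but the principle is the same.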
7. I don't know if this is what you're looking for... Not published in ieee nor acm as far as I know. I had put it there and elsewhere, but it has disappeared for some reason: http://www.msfn.org/board/index.php?showtopic=129474 I'll have to give it a look. Anyway, it's easy enough that you can put the first hard thoughts on it without previous research papers. And at least it should be original. I imagine executable programmes can be compressed on hard disk drives, in Ram and maybe the L2 cache, so these all get a better throughput. Not a huge and complicated compression: just something fast hardware can expand without time penalty, on the Cpu chip. Even if you gain only 30% code size, this means money: you can have 2 banks of Ram instead of 3, for instance. And 64b programmes need compression more badly because they're more redundant than 32b programmes. Some readers misunderstood it in the past, so let me explain it again: I don't want extra instructions that improve the speed of zip expansion. I don't want software that expands the executable programme. I want standalone hardware (hard-wired or nearly) that expands the programme on-the-fly during transfers from the Ram to the slowest cache, or even later, between the caches. This puts strong limitations on the nature of the compression, which I suppose must still be invented, in direct connection with said hardware. Marc Schaefer, aka Enthalpy
  8. No. They checked the ABSENCE of X-ray emission where the source of gravitational lensing was. Don't suppose scientists are stubborn or stupid: both would be incompatible with science.
9. A good collision means that the interaction between the protons represents a significant proportion of the kinetic energy. Imagine you want the electrostatic repulsion to equal 7TeV (though interesting effects happen before): the protons must be closer than 2*10^-22 m. This is smaller than a proton, logically enough, as the LHC investigates sub-proton particle physics, so you should rather ask whether the quarks are well aligned, not the protons. The beams have 16µm diameter at the collision points, which is an achievement, but very far from 10^-22 m, meaning that as many protons as possible, as concentrated in volume as possible, repeated over many bunches, are necessary to get real collisions. The LHC improves all this a lot, not just the energy. From the direction of the collision products, experimenters know exactly the "impact parameter" - how aligned the protons were. More generally: don't suppose people working at the LHC - nor elsewhere - are stupid. This would mislead you into wrong conclusions.
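     A quick Python check of that 2*10^-22 m figure, setting the Coulomb energy e^2/(4*pi*eps0*r) equal to 7TeV; back-of-the-envelope only, ignoring quantum effects.

        import math

        EPS0 = 8.854e-12      # F/m
        Q_E  = 1.602e-19      # C

        E_joules = 7e12 * Q_E                       # 7 TeV in joules
        r = Q_E**2 / (4 * math.pi * EPS0 * E_joules)
        print(f"{r:.1e} m")                         # ~2.1e-22 m, well below a proton radius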
10. The limited speed of information is a basic hypothesis of Relativity, which allows it to explain why Lorentz's transformation makes sense. It's not a remote consequence of increasing mass. If you achieve data transfer faster than light, Relativity is lost. Entropy is related to the direction of time. This is the main link I see with causality.
11. In some special - but realistic - cases, graphs exist. If you expand the gas at reasonable speed, it won't exchange heat with the outside world, and then the maximum available work (losses exist) is the gas' change of enthalpy. Then, at moderate gas density, the enthalpy (H) depends just on the gas' temperature (T), so diagrams exist, as simple as H versus T. Or rather, H versus T at many different pressures - but this isn't very useful because pressure changes when the gas is exploited. More commonly, you get diagrams at constant entropy. Or as well, enthalpy versus entropy, very common. Thermodynamics is a bit abstract but interesting. It's one huge basis of physics, but regrettably many physicists are uneasy with it, so learning it would give you an advantage. I suppose beginning with the perfect gas, followed by real gases, by vapour and liquid, and only later generalizing to any system, is the best possible approach, so just go on like this. Most books and courses begin with the general case and give the gas as a special situation - then it's very abstract and off-putting.
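     For concreteness, a minimal numeric sketch in Python of the "maximum work = change of enthalpy" statement; air's cp and both temperatures are illustrative assumptions, not figures from the post.

        # For a perfect gas with no heat exchange, the maximum work per
        # kilogram is the enthalpy drop cp * (T1 - T2). Illustrative values.
        CP_AIR = 1005.0          # J/(kg*K), roughly constant near room temperature

        def max_work_adiabatic(T1, T2):
            """Enthalpy drop in J/kg when the gas cools from T1 to T2 (kelvin)."""
            return CP_AIR * (T1 - T2)

        print(f"{max_work_adiabatic(400.0, 300.0)/1e3:.0f} kJ/kg")  # ~100 kJ/kg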
12. Rotations are absolute. You can detect them through the Coriolis force. You can distinguish them from a gravitational field if you compare the movement of free masses at different positions. Put a mass nearer to the star than the planet is: it moves forward; put another one farther: it moves backwards. This tells you the planet orbits the star.
13. Within the broad ionization provided by a cosmic ray, the bolt finds its narrow way. A discharge concentrates naturally, unless a very low gas pressure prevents it. And as cosmic rays come from all directions (except through the Earth), they can seed a horizontal bolt as well. Cosmic rays need over a few GeV to reach the ground; at 10eV per carrier pair, that makes several hundred million carriers on the track. I saw only one ray per second in a detector at the Expo'92, but Wiki suggests 10,000/m²/s at 1GeV, which looks enough to build a path. I mistrust the linked explanation where the electric field accelerates lucky electrons. While electrons at 100MeV (since such gamma energies are observed) lose about 0.5MeV/m in air, hence can accelerate in a 2MV/m breakdown field, they would still require a ~100m straight path to get 200MeV. As well, the seeds for hot electrons would need >10keV to accelerate further, or >5mm without a collision, and both are impossible from luck at normal pressure and temperature. Electron stopping power of air: http://physics.nist....Text/ESTAR.html
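     The arithmetic behind those figures, in Python, as I read them; the 3GeV ray energy is an assumed "few GeV".

        # Quick check of the figures above (illustrative arithmetic only).
        ray_energy_eV = 3e9        # "a few GeV" deposited along the track
        pair_cost_eV  = 10.0       # energy per carrier pair
        print(f"carriers on track: {ray_energy_eV / pair_cost_eV:.0e}")   # ~3e8

        field_gain = 2.0           # MeV/m gained in a 2 MV/m breakdown field
        air_loss   = 0.5           # MeV/m lost by a ~100 MeV electron in air
        target     = 200.0         # MeV, to reach the observed gamma energies
        # ~130 m, consistent with the ~100 m order of magnitude in the post:
        print(f"straight path needed: {target / (field_gain - air_loss):.0f} m")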
14. The following idea is wild speculation because I know little about lightning, so it probably deserves its legitimate place in the crank ideas subforum. You have been warned. ----- In the past years, bursts of X and gamma rays were observed during lightning strikes. This is a surprise and explanations are difficult because the energy in a bolt isn't concentrated enough, so to say, to produce such photons: the temperature in the bolt is too low for thermal radiation of X and gamma rays, and the energy of electrons is too small as well - even if air gets a smaller density in the bolt, even if some electrons travel a longer distance than the average between collisions. Several explanation attempts are under way, invoking very dynamic processes, and none has yet succeeded as far as I know. ----- My proposal is that the lightning path and the gammas both result from a cosmic ray. Cosmic rays are abundant: at ground level, a wire chamber detector of 2m*1m*0.5m sees over one per second. Over its length, a lightning strike crosses the paths of several cosmic rays, which conduct electricity better than unexcited air does, and since the lightning voltage builds up slowly, the bolt can await favourable triggering by a set of cosmic ray paths. This isn't new: http://en.wikipedia....le_in_lightning And then, the X and gamma detectors would just catch the photons produced by the cosmic ray nearest to the ground, on the path created for the bolt. ----- How to test it? For instance, put several X or gamma detectors on the ground and check if the detector nearest to the impact gets photons. Then attract the bolt with a vertical fast jet of hot or ionized air, for instance, and check which detector on the ground gets photons: the one near the jet or the one aligned with the direction of the bolt's previous segment. Marc Schaefer, aka Enthalpy ----- The test with the direction of the gamma won't be easy because a cosmic ray shower is very broad when it reaches the ground... But since a bolt is rather slow for our present technology, we could check if the gamma rays precede the bolt. Or if other particles from a shower are present.
15. Biomass presently needs a huge area to produce limited energy, alas. Research seeks still-unused organisms to produce more energy from less area, or from area less useful to Mankind, like the seabed. Some people suggest micro-organisms like unicellular algae would convert light into more energy than the species we chose and optimized for food production, like maize. For sure, replicating only the plants' photochemical reaction, not the full plant nor cell, looks more efficient as a promise. To my very limited knowledge, this hasn't succeeded up to now because our understanding of photosynthesis suffices for a vague explanation, but not for working engineering. So the general idea exists, but the practical way needs your better ideas. Beware: competition is hard.
16. 1. This is a P vs T graph, which doesn't tell how much work you can extract - there are some indirect links. For water vapour up to a few tens of bars, you can take P (atm) = [T (°C) / 100]^4.
     A & B. 250 psi is realistic. Technology commonly uses hydraulic pressures of 210 bar, 350 bar, sometimes 700 bar, uncommonly 1500 bar. This pressure is stored in a gas, nitrogen. Doubling the pressure less than doubles the maximum available work, unless the pressure increase over 1 atm is very small.
     B plus. If you pass a liquid through an engine, the maximum available work is volume * pressure (or rather pressure drop, and only as long as this stays constant). But then the work is obtained from a gas that pushes the liquid: it's not extracted from an energy stored in the liquid itself. In thermodynamics, the liquid keeps its internal energy; its enthalpy is converted to work. Interestingly, enthalpy is defined from the liquid's state and suffices to compute the available work, but the work comes from an energy stored in part elsewhere - here it was fully in the gas.
     B more. When expanding a gas, it cools down. The work you can extract depends on whether you let it cool down naturally, or inject heat into it so that, for instance, its temperature stays constant - or any other situation. So no direct relation exists between pressure and work - only for some simple cases. If no heat is exchanged, the maximum work is the change of enthalpy; if temperature is constant, it's the change of free energy.
     B chatter. Internal energy and enthalpy are strongly linked with the gas' temperature, rather than pressure. In a "perfect gas" (which a real gas may resemble if its density is far less than the liquid's density), internal energy and enthalpy depend only on the temperature.
     C. Yes.
     D. Only if the temperature is the same in both cases, and only if density is far less than the liquid's density, is the mass stored proportional to pressure. This mass converts into a volume of air taken at normal atmospheric conditions.
     E. As a metric fundamentalist, I find imperial units useless in any situation.
     F. Gases are more complicated because they have a temperature, a pressure and a volume, where one property can be deduced from the two others: ONLY for a perfect gas, it's simply P*V = n*R*T with sensible units, where n is the number of moles (29g each for air) and R = 8.3145 J/mol/K. Since all three can vary when you expand the gas, you must know more about how they vary during the process. For instance, if no heat is exchanged, then P*V^gamma is constant, with gamma ~ 1.4 for air.
     F more. At the pressure you cite, and reasonable temperatures, air is nearly a perfect gas with gamma ~ 1.4, and then its enthalpy is 3.5 * R * T (K) in joules per mole of 29g, or nearly 1kJ/kg/K. With no heat exchange, the available work is the change of enthalpy, and the ratio of pressures and the ratio of volumes equal the ratio of temperatures raised to the powers 3.5 and 2.5 respectively - put them the proper way round, so that pressure rises with temperature and the volume shrinks. Then the maximum work can be computed with limited effort; see the sketch at the end of this post. In more complicated cases you need experimental tables. From the powers 2.5 and 3.5 you see the relationship with pressure won't be linear. But with the tank's volume it is, as that defines how many moles you store, since P and T define how much volume each mole takes in the tank.
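     The sketch announced under "F more": adiabatic expansion work per mole of air, in Python. The 17e5 Pa (~250 psi) and 300K starting point are assumed illustrative values, not figures from the question.

        # Adiabatic expansion of air treated as a perfect gas, gamma = 1.4.
        R     = 8.3145          # J/(mol*K)
        GAMMA = 1.4

        def adiabatic_work_per_mole(P1, T1, P2):
            """Maximum work (J/mol) expanding from (P1, T1) to P2, no heat exchanged.
            T2/T1 = (P2/P1)^((gamma-1)/gamma), i.e. exponent 1/3.5 for air."""
            T2 = T1 * (P2 / P1) ** ((GAMMA - 1.0) / GAMMA)
            cp = GAMMA / (GAMMA - 1.0) * R          # 3.5 * R for air
            return cp * (T1 - T2)                   # enthalpy drop = available work

        # Example: 17e5 Pa (~250 psi) down to 1 atm, starting at 300 K:
        print(f"{adiabatic_work_per_mole(17e5, 300.0, 1e5):.0f} J/mol")  # ~4800 J/mol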
  17. Listen to a better source then.
18. You should make a very clear mental difference between science and pseudo-science, as in some of the links you give.
19. France is said to have developed, in Saint-Louis, such a miniature EMP weapon that uses controlled switches and fits in a suitcase. It was described in a "Quarks & Co" programme on the WDR channel - or was it "Nano" on the 3Sat channel? Such a weapon would have ~100m range, so for military use it should be brought near the target by some drone. But for use by spooks, it's a perfect range. The Italian towns that seem to have been targeted are Aoste on 20th January 2005, and Canneto some months before. What was observed there, with all electronic devices connected to wires being destroyed, strongly suggests an EMP weapon.
20. Why do you feel the need to give some sort of explanation, as if Nature had to obey your understanding? The absence of a di-proton is still an open question within the existing theories. Same for the di-neutron.
  21. No. Unless you prove with sensible engineering figures it can be done. Neither. This won't stop rays. No man-assembled biosphere has ever worked. If you achieve this just on Earth, just with plants if you feel it simpler, you'll be famous.
22. Fission with two nucleons, that would be new. Fusion between two neutrons has never been seen, and this still needs an explanation. Between two protons, it never produces a di-proton (whose mass is hence unknown), which also remains to be explained. Mass is energy, whatever the form of energy: it can be speed, chemical energy, height, heat... In a dam, water loses mass, which is converted into electricity in the turbine and the alternator. In fusion, the strong force releases energy available for instance as a neutron's speed, while in fission, electrostatic repulsion releases energy available as the fragments' speed. Because nuclear energy is more concentrated, the mass variation is more perceptible, and such figures become practically usable. That's all about mass variation. In a collision at higher energy, like at the LHC, nearly anything can happen (mainly particle production), and the result depends much on the collision energy and very little on what the initial particles were.
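     To put a number on "water loses mass": the relative mass defect of falling water is g*h/c^2. A quick Python check; the 100m head is a made-up example figure.

        G = 9.81        # m/s^2
        C = 2.998e8     # m/s

        head = 100.0    # metres of fall, invented example
        print(f"relative mass loss: {G * head / C**2:.1e}")   # ~1.1e-14
        # Nuclear reactions sit around 1e-3, which is why their mass defect
        # is measurable while the dam's is hopelessly small.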
23. The only difficulty is that silicon can make bizarre contacts with the wire and can be depleted - including at its surface under some circumstances - in which case it acts as an insulator with εr~12 instead of a conductor. Then, the silicon area would bring a smaller capacitance than a metal would. If the frequency is high enough, silicon's resistance (which depends fundamentally on its doping, over ten orders of magnitude) can add in series with the capacitance, and be itself shunted by silicon's internal capacitance. Then the resulting capacitance would again be smaller, and you could define it only by telling which equivalent circuit you choose, basically series or parallel. About the field: the DC field cannot be defined with so little information, because it depends on silicon's doping and - much worse - on the cleanliness of its surface, which is essentially unknown and poorly understood. But capacitance is normally defined for AC fields, which superimpose on the DC field, and don't depend on these complications if all dopings permit silicon to act as a reasonable conductor.
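     A toy Python illustration of the series-versus-parallel point: the same measured impedance reads as very different "capacitances" depending on the fitted model. All component values are invented.

        import math

        R = 50.0        # ohms, bulk silicon resistance (depends wildly on doping)
        C = 10e-12      # farads, contact capacitance
        f = 1e9         # hertz, measurement frequency

        w = 2 * math.pi * f
        Z = complex(R, -1.0 / (w * C))        # series R-C impedance

        C_series = -1.0 / (w * Z.imag)        # fit with a pure series model
        Y = 1.0 / Z
        C_parallel = Y.imag / w               # fit with a pure parallel model
        print(f"series model:   {C_series*1e12:.2f} pF")    # 10.00 pF
        print(f"parallel model: {C_parallel*1e12:.2f} pF")  # ~0.92 pF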
24. It's not that complicated... Pressure, deformation, electric field, and charge are linked together. If there were no piezoelectricity, in an insulator:
     - pressure and deformation are linked by the elasticity
     - electric field and charge (or rather, polarization) are linked by the permittivity
     Piezoelectricity adds a coupling between mechanics and electricity; for instance, an electric field can induce a pressure. And since a deformation as well can induce a pressure, and a charge, etc., you get a relation between all four values; see the equations at the end of this post. In a crystal, this relation tends to be very linear, so proportionality factors suffice. But piezoelectric crystals are not (can't be) isotropic, which is the very reason why we need more factors, which are written as a tensor. For instance, a shear can induce an electric field - but a compression can as well, and this depends on the direction of your stress relative to the crystal's axes. It also depends on how you chose the mathematical axes, but all people who survived took them in a meaningful way relative to the crystal. Now, without a proper math theory, and if we knew nothing about individual materials, the tensor could have 3^3=27 independent coefficients, but because of symmetries, far fewer coefficients are needed, and they are usually re-numbered with simpler subscripts which just tell "compression" or "shear" or "effect parallel to cause". ----- Ferroelectric materials, of which PVDF is best known (BaTiO3 would be another one), are very similar to piezoelectric ones and are often called piezoelectric. They can be isotropic initially, but a first big electric field orders them in one direction, and this persists until the next big field or heat is applied. Once they're formed, they become anisotropic and behave as piezoelectric materials. Very interesting and useful because, as a plastic, PVDF deforms much more than a ceramic, hence is more efficient. Its acoustic impedance also matches water and the human body better. PVDF is a zigzag of CH2 and CF2 alternately - this alternation must be accurate over many atoms to work. The initial polarization puts many macromolecules in the same orientation, say with fluorine up and hydrogen down, which makes the plastic polarized and sensitive to fields and deformations. Because field and polarization deform PVDF a lot, it's more important to check whether Young's modulus is defined in open or short circuit, permittivity at zero force or zero deformation, and so on. The effect is important enough that useful mechanical damping is obtained just by putting a resistor across a part made of PVDF - this has been proposed to dampen turbulence on aircraft and boat parts. Some more explanations are in Wiki, and if you're lucky, at companies that build components from these materials, like Murata and TDK.
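     For reference, the standard strain-charge form of the coupled relations, in textbook notation (not from the post): S is strain, T stress, E electric field, D electric displacement; d is the rank-3 tensor discussed above, s^E the compliance at constant field, ε^T the permittivity at constant stress.

        \begin{align}
          S_{ij} &= s^{E}_{ijkl}\, T_{kl} + d_{kij}\, E_{k} \\
          D_{i}  &= d_{ikl}\, T_{kl} + \varepsilon^{T}_{ik}\, E_{k}
        \end{align}

     The d coefficients appear in both lines, which is exactly the "deformation induces charge, field induces deformation" symmetry described above.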
  25. Oops, maybe I misunderstood the initial query of this discussion, indeed. OP, could you give more details about "suspend between electromagnets without actually touching them"? Do you mean: levitate? Or suspend them under a wire and observe the effect of the magnets?