Why does (-2)(-3) = 6?

Why is it positive?

Why do we accept that a negative times a negative is positive? What is the origin of this rule?
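For context, one standard derivation uses only the distributive law and additive inverses; a sketch (a common textbook argument, not specific to this thread):

```latex
\begin{align*}
0 &= (-2)\cdot 0 = (-2)\bigl(3 + (-3)\bigr) = (-2)(3) + (-2)(-3) = -6 + (-2)(-3),\\
\text{so}\quad (-2)(-3) &= 6.
\end{align*}
```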

Thanks in advance


\begin{equation}

\tan{(2X-\phi_0)} = \frac{\rho\sin{\phi}-a\sin{X}}{\rho\cos{\phi}-a\cos{X}}

\end{equation}

Thank you very much!


When you take the square root of (-2)^10, what is the result?

Because if I use the fractional-exponent notation, the result is (-2)^(10/2) = (-2)^5 .... a negative number.

On the other hand, if the power is calculated first, the result is positive.
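A quick numeric check of the two readings (Python, purely illustrative; the discrepancy arises because the rule (a^m)^(1/n) = a^(m/n) is not valid for a negative base):

```python
# computing the power first, then the (principal) square root
inner = (-2) ** 10          # 1024, positive
root = inner ** 0.5         # 32.0

# collapsing the exponents first: (-2)^(10/2) = (-2)^5
collapsed = (-2) ** 5       # -32

print(root, collapsed)      # 32.0 -32
```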

Thanks in advance


Anyway, the engine showed that there exists no solution for this.

I tried this on a few other engines and still got the same result. I can't understand this.

Both -1 and 1 give 1 when multiplied by themselves, and no square of a real number can be negative.

So sqrt(x) = -1 should have x = 1 as its solution.

Can anyone explain this?

Also, I searched for this on certain sites, and they explained it with graphs of complex numbers involving parabolas, hyperbolas, etc. I have not learnt calculus yet.

If anyone can explain this elaborately without calculus, I will be grateful.
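For reference, most engines interpret sqrt as the principal (nonnegative) square root, which is why they report no solution; a minimal check in Python (illustrative only):

```python
import math

# math.sqrt returns the principal (nonnegative) root,
# so sqrt(1) is 1.0, never -1 -- hence sqrt(x) = -1 has no real solution
r = math.sqrt(1)
print(r)   # 1.0
```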

Best wishes

We consider the sums

$$\alpha_i = e + y_i,\qquad i = 1, 2, \dots, N. \tag{1}$$

Now each $\alpha_i = e + y_i$ belongs to $V - W$. We prove it as follows.

If possible, let $\alpha_i$ belong to $W$. We have

$$e = \alpha_i - y_i = \alpha_i + (-y_i). \tag{2}$$

Both $\alpha_i$ and $-y_i$ belong to $W$; therefore their sum $e$ should belong to $W$. This contradicts our postulate that $e$ belongs to $V - W$. Therefore each $\alpha_i$ belongs to $V - W$.

Next we consider the equation

$$\sum_i c_i \alpha_i = 0 \tag{3}$$

$$\Rightarrow\ \sum_{i=1}^{N} c_i (e + y_i) = 0$$

$$\Rightarrow\ e\sum_{i=1}^{N} c_i = -\sum_{i=1}^{N} c_i y_i. \tag{4}$$

The right side of (4), being a linear combination of vectors from $W$, belongs to $W$, while the left side belongs to $V - W$: if the left side belonged to $W$, then $\bigl(1/\sum_i c_i\bigr)\bigl(\sum_i c_i\bigr)e = e$ would belong to $W$, which is not the case. The only way to avoid this predicament is to assume that $\sum_i c_i$ on the left side of (4) is zero, so that each side of (4) represents the null vector. We cannot have all $c_i = 0$ individually, since in view of (3) that would make the space $N$-dimensional [$N$ is much greater than $n$, the dimension of the parent vector space $V$].

This gives the equations

$$\sum_{i=1}^{N} c_i = 0 \tag{3.1}$$

$$\sum_{i=1}^{N} c_i y_i = 0. \tag{3.2}$$

From (3.1),

$$c_N = -c_1 - c_2 - c_3 - \dots - c_{N-1}. \tag{4.1}$$

Combining (3.2) with (4.1), we have

$$y_N = \frac{c_1}{c_1 + c_2 + \dots + c_{N-1}}\,y_1 + \frac{c_2}{c_1 + c_2 + \dots + c_{N-1}}\,y_2 + \dots + \frac{c_{N-1}}{c_1 + c_2 + \dots + c_{N-1}}\,y_{N-1} \tag{5.1}$$

$$y_N = a_1 y_1 + a_2 y_2 + \dots + a_{N-1} y_{N-1}, \tag{5.2}$$

where

$$a_i = \frac{c_i}{c_1 + c_2 + \dots + c_{N-1}}. \tag{5.3}$$

From (5.3) we have the identity

$$a_1 + a_2 + \dots + a_{N-1} = 1. \tag{6}$$

But the $N$ ($\gg n$) vectors were chosen arbitrarily, so equation (5.2) should not come under the constraint of equation (6): we could have chosen $y_N$ in the form of (5.2) in a manner that violates (6).

See the attached file.

I want to differentiate it with respect to X to find the minimum peak.

Thanks for the help.

https://drive.google.com/file/d/10z63Xidgs3m8p04_C6ZiGh-8Q6KTwPsh/view?usp=sharing

Incidentally, I tried the LaTeX with the code button, but I am not getting the correct preview.

Example

\begin{equation}\bar{A}^{\mu\nu}=\frac{\partial \bar{x}^{\mu}}{\partial x^{\alpha}}\frac{\partial \bar{x}^{\nu}}{\partial x^{\beta}}A^{\alpha \beta}\end{equation}

\[{\bar{A}}^{\mu\nu}=\frac{\partial {\bar{x}}^{\mu}}{\partial x^{\alpha}}\frac{\partial {\bar{x}}^{\nu}}{\partial x^{\beta}}\]

For a long time now I have not been a frequent user of LaTeX, thanks to the equation bar of MS Word. But, like many others, I do appreciate the use of LaTeX in various forums. Help is requested from the forum regarding LaTeX. Thanks in advance for any help provided.


Does that mean we can assume $\exists x \in \mathbb{R}(x \neq x)$?

If so, would this provide us with the basis for a field with one element?

First, I’ll define what I mean by a “large” prime factor. Let N be a number. If a prime factor of N is greater than the square root of N, then that factor is a large prime factor of N.

As an example, 11 is a large prime factor of 22, because 11 is greater than the square root of 22, and so 22 has a large prime factor.

On the other hand, 3 is not a large prime factor of 12 because 3 is less than the square root of 12, and so 12 does not have a large prime factor.

Below is a list of composite numbers with large prime factors:

6, 10, 14, 15, 20, 21, 22, 26, 28, 33, 34, 35, 38, 39, 42, 44, 46, 51, 52, 55, 57, 58, 62, 65, 66, 68, 69, 74, …

It seems that, as numbers increase, a greater and greater percentage of them have large prime factors. I say that it *seems* to be true because I have sampled some groups of big numbers, and most of them had large prime factors. Of course, that isn’t proof, and as far as I know it could be wrong.

If we check all of the numbers up to 330, the majority of counting numbers are composite numbers with large prime factors.
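A brute-force check of this range is easy to script; a minimal sketch (Python, trial division, fine for numbers this small):

```python
def largest_prime_factor(n):
    """Return the largest prime factor of n (n >= 2) by trial division."""
    largest, p = 1, 2
    while p * p <= n:
        while n % p == 0:
            largest, n = p, n // p
        p += 1
    return max(largest, n) if n > 1 else largest

# composite numbers up to 330 whose largest prime factor exceeds sqrt(n)
with_large = [n for n in range(4, 331)
              if largest_prime_factor(n) != n          # composite
              and largest_prime_factor(n) ** 2 > n]    # "large" factor

print(with_large[:10])   # [6, 10, 14, 15, 20, 21, 22, 26, 28, 33]
print(len(with_large))
```

The first entries reproduce the list above.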

If I understand it correctly, then what I’m asking about is similar to the question answered by the Prime Number Theorem. According to the Prime Number Theorem, for a very large number N, the probability that a random integer not greater than N is prime is approximately 1/log(N).

Because the prime numbers are distributed in this way, and 1/log(N) can be arbitrarily close to zero, the composite numbers can be seen as essentially the same as all integers, for very large values of N. For very large numbers, my question is the same as asking what percentage of all integers have a large prime factor.

My question is, “For a very large number N, what is the probability that a random integer less than N has a large prime factor?” “Is this probability greater than 0.5?” I’m hoping there might be some kind of answer to this in the same way that the Prime Number Theorem answers the question about the distribution of prime numbers.

all a, b ∈ G. Show that H = { g^2 | g ∈ G } is a normal subgroup of G.

Define

T:V→V,f↦df/dx.

How do I prove that T is a linear transformation?

(I can do this with numbers, but the trig is throwing me.)
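For what it’s worth, linearity only requires checking the two defining identities, and for differentiation these reduce to the familiar sum and constant-multiple rules (a sketch, with f, g ∈ V and a scalar c):

```latex
T(f+g) = \frac{d}{dx}(f+g) = \frac{df}{dx} + \frac{dg}{dx} = T(f) + T(g),
\qquad
T(cf) = \frac{d}{dx}(cf) = c\,\frac{df}{dx} = c\,T(f).
```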

Show that [math] {Ax = v_{0}}[/math] has no solution.

I know [math] v_{0}[/math] is an eigenvector of A with eigenvalue 0, and the other eigenvectors do not have 0 eigenvalues.

So,

[math] {Av_0= \lambda_{0} v_{0}}[/math]

[math] {Av_0= 0 v_{0}}[/math]

[math] {Av_0= 0}[/math]

So [math] {v_0}[/math] spans the null space of A (since no other eigenvectors have eigenvalue 0).

So the question is asking me to prove that there is no vector that, when operated on by A, lands on [math] {v_0}[/math].

I can't think of how to prove this though, apart from saying "A operating on x can only give a vector that is 0 or in the column space"
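A concrete numeric illustration may help (hypothetical 2×2 numbers, assuming A is symmetric so its eigenvectors are orthogonal and the column space is the orthogonal complement of the null space):

```python
import numpy as np

# build a symmetric A with eigenpairs: v0 -> eigenvalue 0, v1 -> eigenvalue 2
v0 = np.array([1.0, -1.0])
v1 = np.array([1.0, 1.0])
A = 2 * np.outer(v1, v1) / (v1 @ v1)      # spectral form: only v1 contributes

print(A @ v0)                              # [0. 0.]  -- v0 is in the null space

# the best least-squares attempt at A x = v0 still leaves a nonzero residual,
# because v0 is orthogonal to the column space of A
x, *_ = np.linalg.lstsq(A, v0, rcond=None)
print(np.linalg.norm(A @ x - v0))          # ~1.414, i.e. the norm of v0 itself
```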

This is what I have now.

$$
\begin{array}{l}
\mathrm{W}^{\perp}=\{p \in P_2 \mid \langle p,\, x+1\rangle=0\} \\[4pt]
\langle p,\, x+1\rangle = p(-1)\,(-1+1) + p(0)\,(0+1) + p(1)\,(1+1) = p(0) + 2\,p(1)
\end{array}
$$

Since we are looking for polynomials such that $\mathrm{p}(0)=2 \mathrm{p}(1)$, and by the definition of $P_2$, these are all

polynomials $a x^2+b x+c$ such that $c=2(a+b+c)$, i.e. the triples a, b, c with $2a+2b+c=0$. In terms of linear algebra, this is the null space of $A=[2,2,1]$, which has dimension 2 and is generated by the vectors

$\begin{bmatrix}1\\0\\-2\end{bmatrix}$ and $\begin{bmatrix}0\\1\\-2\end{bmatrix}$

Which converts back into polynomials to get

$W^{\perp}=\left\{\mathrm{x}^2-2, \mathrm{x}-2\right\}$

Did I solve this question correctly?

I'm trying to prove this:

Prove that det(AB) = 0, where A is m×n and B is n×m with m > n.

I created generic matrices A and B, then used Laplace (cofactor) expansion to conclude.

I'd like to know if there is another way to prove it.
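One alternative is a rank argument: rank(AB) ≤ rank(B) ≤ n < m, so the m×m product is singular. A quick numeric sanity check (illustrative, with arbitrary random matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 2                          # m > n
A = rng.standard_normal((m, n))      # m x n
B = rng.standard_normal((n, m))      # n x m

# AB is m x m but has rank at most n < m, hence zero determinant
print(np.linalg.det(A @ B))          # ~0 (up to floating-point noise)
```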

Thanks!

And the transformation taking each element to its second derivative.

Show the transformation and its matrix with respect to the basis of this space.

Could anybody help me?

Thanks!

Why is this so?

I can understand that both are vector spaces and so "qualify" on that account, but are they uniquely qualified to be Duals of each other?

Is the fact that they have a vector in common (the point p on the surface) important*?

Can the Tangent space be Dual to any other vector space, or is the Cotangent Space the only possibility?

*important in making them Dual Spaces

Each pixel contains 256 × 256 mosaic elements, resulting in an image resolution of 2^{16} (65 536).

Is it possible to solve for x and y?

Since xy and yx are equal to b and c respectively, and are not the same, the elements of the matrix P are the outer product of the two matrices x and y.

Is it possible to solve for x and y such that xy and yx can be made predictable?

Kindly let me know if I am not clear.

Thank you for the kind help.

I need urgent help regarding the following question.

Any help will be greatly appreciated.

Thank you!


$$k_0\sum x_j^1 + k_1\sum x_j^2 + k_2\sum x_j^3 + \dots + k_n\sum x_j^{n+1} = \sum y_j x_j^1$$

$$k_0\sum x_j^2 + k_1\sum x_j^3 + k_2\sum x_j^4 + \dots + k_n\sum x_j^{n+2} = \sum y_j x_j^2$$

$$\vdots$$

$$k_0\sum x_j^n + k_1\sum x_j^{n+1} + k_2\sum x_j^{n+2} + \dots + k_n\sum x_j^{n+n} = \sum y_j x_j^n$$
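Normal equations of this kind can be assembled and solved directly; a minimal numeric sketch (Python/NumPy, with hypothetical data chosen so the answer is obvious):

```python
import numpy as np

# hypothetical data: y = 1 + x + x^2 exactly, so the fit should recover k = (1, 1, 1)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 3.0, 7.0, 13.0, 21.0])

deg = 2
V = np.vander(x, deg + 1, increasing=True)   # columns: x^0, x^1, x^2
# normal equations (V^T V) k = V^T y; the entries of V^T V are exactly the
# power sums of x_j appearing in the system above
k = np.linalg.solve(V.T @ V, V.T @ y)
print(k)   # [1. 1. 1.]
```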

One of the most beautiful algebra formulas is the least squares polynomial formula.

]]>