A prime counting function I have never seen

Intermediate result:

Let $P(x)=\prod_{k=1}^{[x]}\left(1+\dfrac{2}{k}\right)^{k/2}$ and $L(x)=\log P(x)$. Then (deduction follows when I finish my work or get stuck)
$$L(x)=\dfrac{\log(3)}{2}+\log(2)+[x]-\dfrac{1}{2}-H_{[x]}+\sum_{n=3}^\infty (-1)^{n+1}\dfrac{2^{n-1}}{n}\left(\zeta(n-1,3)-\zeta(n-1,[x]+1)\right)$$
I needed $n-1>1$ to use the Hurwitz zeta function, i.e. $n\ge 3$.

This isn't likely the "simplification" you spoke about, and it will get worse when I replace the Hurwitz zeta function by its integral expression and the harmonic number by $H_{[x]}=\log [x] +\gamma +\mathcal{O}(1/[x])$ with the Euler-macaroni, sorry, Euler-Mascheroni constant $\gamma$, or by an integral, too.
Damn it. My video is going to only be like 2 mins long if you keep debunking my stuff lol. It’s all good though. I will also show my mistakes. My videos are more about the journey. Not the destination. Just trying to find like minded people who want to have fun with math and try new things.
 
Here is what I have done. I'm not sure whether this pleases you. It's a little bit complicated.

1) I defined the function
$$P(x)=\prod_{k=1}^{[x]}\left(1+\dfrac{1}{k/2}\right)^{k/2}\;.$$
2) Your claim is thus about $P(14)$, and it says $P(14)\sim\pi\left(e^{15}\right)-\pi\left(e^{14}\right)$ where $\pi$ is the prime counting function, i.e. in other words $P(14)\sim 141738$.
3) You wrote, and I corrected two typos and added the more accurate approximation with the constant $1.08366$ in the denominator:
The amount of primes between $e^{14}$ and $e^{15}$ is $141738$. Using $x/\log(x)$ we get $132034$. $P(14)=135839.07337861$. Using $x/(\log x -1.08366)$ we get $141798$. For further steps, we will use the logarithm of $P(x)$ because sums are easier to handle.
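For anyone who wants to reproduce these numbers, here is a minimal Python sketch (the helper names are mine, not from the thread; sympy's primepi does the exact prime counting, and $e^{15}\approx 3.27\cdot 10^6$ is small enough for it):

```python
from math import exp, prod
from sympy import primepi

def P(x):
    # P(x) = prod_{k=1}^{[x]} (1 + 2/k)^(k/2)
    return prod((1 + 2 / k) ** (k / 2) for k in range(1, x + 1))

exact = primepi(int(exp(15))) - primepi(int(exp(14)))       # exact count
naive = exp(15) / 15 - exp(14) / 14                         # x / log(x)
legendre = exp(15) / (15 - 1.08366) - exp(14) / (14 - 1.08366)

print(exact)             # 141738
print(round(naive))      # ~132034
print(P(14))             # ~135839.07
print(round(legendre))   # ~141798
```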

4) Comment: the approximation with $1.08366$ is hard to beat. It has an error of only $60$, or $0.04\,\%$.

5) Anyway. Here is what I have done with $L(x)=\log P(x)$.

$$\begin{array}{lll} L(x)&=\log(P(x))=\displaystyle{ \sum_{k=1}^{[x]}\frac{k}{2}\log\left(1+\dfrac{2}{k}\right) }\\[18pt] &=\displaystyle{ \dfrac{\log(3)}{2}+\log(2)+\sum_{k=3}^{[x]}\frac{k}{2}\log\left(1+\dfrac{2}{k}\right) } \\[18pt] &=\displaystyle{ \dfrac{\log(3)}{2} +\log(2)+\sum_{k=3}^{[x]}\frac{k}{2}\sum_{n=1}^\infty (-1)^{n+1}\dfrac{2^n}{nk^n} } \\[18pt] &=\displaystyle{ \dfrac{\log(3)}{2}+\log(2)+ \sum_{k=3}^{[x]}\left(1+\sum_{n=2}^\infty(-1)^{n+1}\dfrac{2^{n-1}}{nk^{n-1}}\right) } \\[18pt] &=\displaystyle{ \dfrac{\log(3)}{2}+\log(2)+([x]-2)+\sum_{k=3}^{[x]}\left(-\dfrac{1}{k}+\sum_{n=3}^\infty(-1)^{n+1}\dfrac{2^{n-1}}{nk^{n-1}}\right) } \\[18pt] &=\displaystyle{ \dfrac{\log(3)}{2}+\log(2)+([x]-2)-\sum_{k=3}^{[x]}\dfrac{1}{k}+\sum_{k=3}^{[x]}\sum_{n=3}^\infty(-1)^{n+1}\dfrac{2^{n-1}}{nk^{n-1}} } \\[18pt] &=\displaystyle{ \dfrac{\log(3)}{2}+\log(2)+[x]-\dfrac{1}{2}- \sum_{k=1}^{[x]}\dfrac{1}{k}+\sum_{n=3}^\infty (-1)^{n-1}\dfrac{2^{n-1}}{n}\sum_{k=3}^{[x]}\dfrac{1}{k^{n-1}} } \end{array}$$
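Before simplifying further, here is a quick numerical sanity check of the last line (a sketch of my own; the $n$-sum is truncated, which is harmless since its terms decay like $(2/3)^n$):

```python
from math import log

def L_direct(x):
    # log P(x), summed term by term
    return sum(k / 2 * log(1 + 2 / k) for k in range(1, x + 1))

def L_rearranged(x, N=80):
    # the last line of the derivation above, with the n-sum cut at N
    s = log(3) / 2 + log(2) + x - 0.5 - sum(1 / k for k in range(1, x + 1))
    for n in range(3, N + 1):
        s += (-1) ** (n - 1) * 2 ** (n - 1) / n * sum(k ** (1 - n) for k in range(3, x + 1))
    return s

print(L_direct(14), L_rearranged(14))   # both ~11.81922...
```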
For the harmonic numbers we have the formula

$$\begin{array}{lll} \displaystyle{ \sum_{k=1}^{[x]}\dfrac{1}{k} } &=H_{[x]}= \displaystyle{ \int_0^1\dfrac{1-t^{[x]}}{1-t}\,dt } \\[18pt] &=\log [x] +\gamma + \dfrac{1}{2[x]}+\ldots \\[18pt] &\ldots -\dfrac{1}{12[x]^2}+\dfrac{1}{120[x]^4}-\dfrac{1}{252[x]^6}+\dfrac{1}{240[x]^8}-\dfrac{1}{132[x]^{10}}+\mathcal{O}\left(\dfrac{1}{[x]^{12}} \right) \end{array}$$
and for the Hurwitz zeta-function

$$\zeta(s,q)=\sum_{k=0}^\infty \dfrac{1}{(q+k)^{s}}=\dfrac{1}{\Gamma(s)}\int_0^\infty \dfrac{t^{s-1}e^{-qt}}{1-e^{-t}}\,dt\, , \quad q>0\, , \; s>1$$
the formulas

$$\begin{array}{lll} \displaystyle{ \sum_{k=3}^{[x]}\dfrac{1}{k^{n-1}} } &=\displaystyle{ \sum_{k=0}^{[x]-3}\dfrac{1}{(3+k)^{n-1}}=\sum_{k=0}^{\infty }\dfrac{1}{(3+k)^{n-1}}-\sum_{k={[x]-2}}^{\infty }\dfrac{1}{(3+k)^{n-1}} } \\[18pt] &=\displaystyle{ \zeta(n-1,3)-\sum_{m=0}^\infty \dfrac{1}{([x]+1+m)^{n-1}}=\zeta(n-1,3)-\zeta(n-1,[x]+1) } \\[18pt] &=\displaystyle{ \dfrac{1}{(n-2)!}\int_0^\infty \dfrac{t^{n-2}\left(e^{-3t}-e^{-([x]+1)t}\right)}{1-e^{-t}}\,dt } \end{array}$$
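Both identities are easy to check with mpmath, whose zeta(s, a) is exactly the Hurwitz zeta function (a sketch for the sample values $n=3$ and $[x]=14$; quad does the integral numerically):

```python
from mpmath import mp, zeta, quad, exp, factorial, inf, mpf

mp.dps = 20
n, X = 3, 14

direct = sum(mpf(1) / k ** (n - 1) for k in range(3, X + 1))
via_zeta = zeta(n - 1, 3) - zeta(n - 1, X + 1)      # Hurwitz zeta difference
via_integral = quad(
    lambda t: t ** (n - 2) * (exp(-3 * t) - exp(-(X + 1) * t)) / (1 - exp(-t)),
    [0, inf]) / factorial(n - 2)

print(direct, via_zeta, via_integral)   # all three agree
```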
In total, this gives

$$\begin{array}{lll} L(x)&=\displaystyle{ \dfrac{\log(3)}{2}+\log(2)+[x]-\dfrac{1}{2}-\int_0^1\dfrac{1-t^{[x]}}{1-t}\,dt +\ldots } \\[18pt] &\displaystyle{ \ldots+\int_0^\infty \dfrac{e^{-3t}-e^{-([x]+1)t}}{1-e^{-t}}\left(\sum_{n=3}^\infty (-1)^{n-1}\dfrac{2^{n-1}}{n(n-2)!} t^{n-2}\right) \,dt } \end{array}$$
Now (using WolframAlpha, WA) we get

$$\sum_{n=3}^\infty (-1)^{n-1}\dfrac{2^{n-1}}{n(n-2)!} t^{n-2}= \dfrac{ 2t^2 - 1 + 2te^{-2 t} + e^{-2 t}}{2 t^2}$$
and thus

$$\begin{array}{lll} I_{[x]}(t)&= \displaystyle{ \dfrac{e^{-3t}-e^{-([x]+1)t}}{1-e^{-t}}\left(\sum_{n=3}^\infty (-1)^{n-1}\dfrac{2^{n-1}}{n(n-2)!} t^{n-2}\right) } \\[18pt] &=\displaystyle{ \dfrac{e^{-3t}-e^{-([x]+1)t}}{1-e^{-t}}\ \cdot\ \dfrac{ 2t^2 - 1 + 2te^{-2 t} + e^{-2 t}}{2 t^2} } \end{array}$$
and finally

$$L(x)=\log\left(\dfrac{2\sqrt{3}}{\sqrt{e}}\right)+[x]-H_{[x]}+\int_0^\infty I_{[x]}(t)\,dt\;.$$
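Before testing, a quick check that WA's closed form for the integrand series holds up numerically (a small sketch of my own; 40 terms suffice because of the factorial decay):

```python
from math import exp, factorial

def series(t, N=40):
    # the n-sum from above, truncated at N terms
    return sum((-1) ** (n - 1) * 2 ** (n - 1) / (n * factorial(n - 2)) * t ** (n - 2)
               for n in range(3, N + 1))

def closed(t):
    # WA's closed form
    return (2 * t ** 2 - 1 + 2 * t * exp(-2 * t) + exp(-2 * t)) / (2 * t ** 2)

for t in (0.1, 1.0, 5.0):
    print(series(t), closed(t))   # each pair agrees
```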
Let us test our result with $x=14$.

$$\begin{array}{lll} L(14)&=\log P(14)=\log (135839.07337861) \approx 11.81922618\\[18pt] H_{14}&=\dfrac{1171733}{360360}\\[18pt] \displaystyle{ \int_0^\infty I_{14}(t)\, dt } &\approx 0.328335\\[18pt] L(14)&\approx 13.5+\log(2\sqrt{3})-\dfrac{1171733}{360360}+0.328335 \approx 11.819226 \end{array}$$
There is a small error caused by the approximation of the integral by WA.
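The same test can be scripted end to end; here is a sketch with mpmath's quad standing in for WA's numerical integration (the function names are mine):

```python
from mpmath import mp, quad, exp, log, sqrt, harmonic, inf, e, mpf

mp.dps = 30
X = 14

def I(t):
    # the integrand I_[x](t) from above
    return ((exp(-3 * t) - exp(-(X + 1) * t)) / (1 - exp(-t))
            * (2 * t ** 2 - 1 + 2 * t * exp(-2 * t) + exp(-2 * t)) / (2 * t ** 2))

integral = quad(I, [0, inf])
L_formula = log(2 * sqrt(3) / sqrt(e)) + X - harmonic(X) + integral
L_direct = sum(k / mpf(2) * log(1 + mpf(2) / k) for k in range(1, X + 1))

print(integral)              # ~0.328335...
print(L_formula, L_direct)   # both ~11.8192261...
```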

6) At least you have a nice formula, and you could test values for $x$ other than $14$. I think they get better with increasing $x$, but not as good as Legendre's approximation.

7) So much for "within minutes". Nice constant, by the way.

Here is the WA link for the integral:
That should save you a lot of typing. You only have to replace the $14$ by another number for $x$.
 
I have checked the numbers for $P(20)$ and the number of primes between $e^{20}$ and $e^{21}$, compared them with the numbers for $P(14)$ and the number of primes between $e^{14}$ and $e^{15}$, and compared both with Legendre's approximation. I have found:

$$\begin{array}{lll} \Delta_{abs}(14)&=\pi\left(e^{15}\right)-\pi\left(e^{14}\right)-[P(14)]=5899\\[12pt] \Delta_{rel}(14)&=\dfrac{\Delta_{abs}(14)}{\pi\left(e^{15}\right)-\pi\left(e^{14}\right)}\approx 4.162\,\%\\[36pt] \Delta^L_{abs}(14)&=\pi\left(e^{15}\right)-\pi\left(e^{14}\right)-\left[\dfrac{e^{15}}{15-1.08366}\right]+\left[\dfrac{e^{14}}{14-1.08366}\right]\\[12pt] &= 141738-234905+93107=-60\\[12pt] \Delta^L_{rel}(14)&=\dfrac{\Delta^L_{abs}(14)}{\pi\left(e^{15}\right)-\pi\left(e^{14}\right)}\approx -0.0423\, \% \\[36pt] \Delta_{abs}(20)&=\pi\left(e^{21}\right)-\pi\left(e^{20}\right)-[P(20)]=774075\\[12pt] \Delta_{rel}(20)&=\dfrac{\Delta_{abs}(20)}{\pi\left(e^{21}\right)-\pi\left(e^{20}\right)}\approx 1.911\,\%\\[36pt] \Delta^L_{abs}(20)&=\pi\left(e^{21}\right)-\pi\left(e^{20}\right)-\left[\dfrac{e^{21}}{21-1.08366}\right]+\left[\dfrac{e^{20}}{20-1.08366}\right]\\[12pt] &= 40510215 - 66217776 + 25647942 = -59619\\[12pt] \Delta^L_{rel}(20)&=\dfrac{\Delta^L_{abs}(20)}{\pi\left(e^{21}\right)-\pi\left(e^{20}\right)}\approx -0.147\,\% \end{array}$$
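The arithmetic of this table can be re-derived with a few lines of Python (a sketch; the exact prime counts $\pi(e^n)$ are copied from the post itself, since counting primes up to $e^{21}\approx 1.3\cdot 10^9$ from scratch is slow, and the rounding convention may shift the Legendre terms by one):

```python
from math import exp, log, prod, floor

def P(x):
    return prod((1 + 2 / k) ** (k / 2) for k in range(1, x + 1))

def legendre(y):
    return round(y / (log(y) - 1.08366))

pi_14_15 = 141738     # pi(e^15) - pi(e^14), quoted above
pi_20_21 = 40510215   # pi(e^21) - pi(e^20), quoted above

print(pi_14_15 - floor(P(14)))                            # 5899
print(pi_20_21 - floor(P(20)))                            # 774075
print(pi_14_15 - legendre(exp(15)) + legendre(exp(14)))   # -60
print(pi_20_21 - legendre(exp(21)) + legendre(exp(20)))   # -59619
```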
This does not prove my claim, but gives evidence that your formula gets better with higher numbers, and that Legendre is unbeatable.

P.S.: The game you're playing is contagious. 8-)
 
lol oh no I have created a monster. I really do love this game. Been playing it for probably 20 years. The very first time I heard that the distribution of primes was random I was hooked. The very first thing I did was find some graph paper and started to write the integers in a spiral and circle the primes. Freaked out when I noticed the long lines. Then when I found out that this was only discovered in the 1960s I realized this is a game that anybody can play. If I could find something that was only discovered 60 years ago, which is like yesterday in the math world, anyone could.

Did you watch my first video on using the golden ratio for primes? I found how you can also sum primes to get prime density. And that was only discovered a few years ago. It’s very addictive. :)
 
I really do love this game. Been playing it for probably 20 years. The very first time I heard that the distribution of primes was random I was hooked.
That makes mathematics different from other sciences. You can basically "define" things and prove theorems about it. Whether they serve a purpose is another question. I also have a hobbyhorse. I made a simple observation (provable) that nobody else seems to be interested in. The problem: I have a lot of examples but I cannot find a tool that reliably delivers results. Only a whole bunch of even more examples.
 
Care to share this observation? Keep it simple for me, please.
 
No problem. The only question is where I may start from. It is about Lie algebras. That sounds more complicated than it is, but it would help a lot if I may use matrices and their multiplication.
 
Well this is a good test to see if you can explain it to a layman.
 
That's a bit like explaining the grammar of if-clauses without using words of Latin origin.

I need a vector space. That is a set of objects which can be added and subtracted, and which contains a zero object that does not change an object if added. Moreover, addition needs to be commutative, i.e. symmetric, i.e. $X+Y=Y+X$, and associative, i.e. $X+(Y+Z)=(X+Y)+Z$; the order is irrelevant. Ordinary integers, quotients, or real numbers are an example. However, vector spaces require a second operation, called multiplication with scalars. The word scalar can be taken literally, as it means to scale the objects: we can stretch them or compress them, or leave their value unchanged if the scaling factor is one.

Arrows are such objects. They can be added like linear forces can be added, and their lengths change if we multiply them with a scalar, a scaling factor. We do this every day without noticing (sorry, I'll keep it metric) when we buy 100 g of sweets and the price on the shelf is per kg: we compress the weight force of 1 kg by a factor of 10 and carry home 100 g of sweets. Weights are vectors: they can be added, subtracted, and scaled. And they are arrows, as they have a direction: they always point from the mass to the center of the Earth.

The elements of a vector space are called vectors. I need square matrices, square number schemes, as vectors, e.g.
$$\begin{pmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{pmatrix}.$$
Addition and scalar multiplication are componentwise. The zero vector $0$ is the matrix with zeros everywhere.

$$\begin{array}{lll} \begin{pmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{pmatrix}+\begin{pmatrix}b_{11}&b_{12}&b_{13}\\b_{21}&b_{22}&b_{23}\\b_{31}&b_{32}&b_{33}\end{pmatrix}&=\begin{pmatrix}a_{11}+b_{11}&a_{12}+b_{12}&a_{13}+b_{13}\\a_{21}+b_{21}&a_{22}+b_{22}&a_{23}+b_{23}\\a_{31}+b_{31}&a_{32}+b_{32}&a_{33}+b_{33}\end{pmatrix}\\[24pt] c\cdot \begin{pmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{pmatrix}&=\begin{pmatrix}ca_{11}&ca_{12}&ca_{13}\\ca_{21}&ca_{22}&ca_{23}\\ca_{31}&ca_{32}&ca_{33}\end{pmatrix} \end{array}$$
Now, it gets a little bit messy. We have another multiplication: vectors times vectors, or here matrices times matrices. It is defined as (I only take three rows and columns, but any positive integer is possible)

$$\begin{pmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{pmatrix}\, \cdot \,\begin{pmatrix}b_{11}&b_{12}&b_{13}\\b_{21}&b_{22}&b_{23}\\b_{31}&b_{32}&b_{33}\end{pmatrix}= \begin{pmatrix} a_{11}b_{11}+a_{12}b_{21}+a_{13}b_{31}\;&\;a_{11}b_{12}+a_{12}b_{22}+a_{13}b_{32}\;&\;a_{11}b_{13}+a_{12}b_{23}+a_{13}b_{33}\\ a_{21}b_{11}+a_{22}b_{21}+a_{23}b_{31}\;&\;a_{21}b_{12}+a_{22}b_{22}+a_{23}b_{32}\;&\;a_{21}b_{13}+a_{22}b_{23}+a_{23}b_{33}\\ a_{31}b_{11}+a_{32}b_{21}+a_{33}b_{31}\;&\;a_{31}b_{12}+a_{32}b_{22}+a_{33}b_{32}\;&\;a_{31}b_{13}+a_{32}b_{23}+a_{33}b_{33} \end{pmatrix}$$
Matrix multiplication can be memorized as "row times column". The first index always names the row, the second one the column. It is still associative but no longer commutative, i.e. $A\cdot (B\cdot C)=(A\cdot B)\cdot C$ but $A\cdot B\neq B\cdot A$ in general. For example, matrices which have non-zero entries only on their diagonal commute; general matrices do not.
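For anyone following along in code: numpy's @ operator implements exactly this "row times column" rule, and it shows the failure of commutativity immediately (a small sketch, not part of the original post):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])

print(A @ B)                    # row of A times column of B
print(B @ A)                    # a different result
print((A @ B == B @ A).all())   # False: not commutative
```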

We next define yet another multiplication by setting $[A,B]=A\cdot B-B\cdot A$. This multiplication has some strange properties (spot-checked in the code after the list):
1.) $[A,B+C]=[A,B]+[A,C]$ and $[A+B,C]=[A,C]+[B,C]$
2.) $c\,[A,B]=[cA,B]=[A,cB]$ for real numbers $c\in \mathbb{R}$
3.) $[A,A]=0$
4.) $[A,[B,C]]+[B,[C,A]]+[C,[A,B]]=0$
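Here is that check: random $3\times 3$ matrices, the bracket, and the four properties (my own sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))
c = 2.5

def bracket(X, Y):
    return X @ Y - Y @ X

print(np.allclose(bracket(A, B + C), bracket(A, B) + bracket(A, C)))  # 1.)
print(np.allclose(c * bracket(A, B), bracket(c * A, B)))              # 2.)
print(np.allclose(bracket(A, A), 0))                                  # 3.)
jacobi = (bracket(A, bracket(B, C)) + bracket(B, bracket(C, A))
          + bracket(C, bracket(A, B)))
print(np.allclose(jacobi, 0))                                         # 4.)
```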

Such a structure, here a real vector space with $[\cdot,\cdot]$ as multiplication, is called a Lie algebra. It is less complicated if we look at examples. For instance, all matrices of the form $\begin{pmatrix}x&y\\0&0\end{pmatrix}$ form a Lie algebra, as do all matrices of the form $\begin{pmatrix}x&y\\z&-x\end{pmatrix}$.

There is no category of objects in mathematics without corresponding functions. In the case of vector spaces, these are linear transformations. I need linear transformations from a Lie algebra to itself, i.e. if $L$ denotes the Lie algebra, I need functions $\varphi$ with the properties
1.) $\varphi\, : \,L\longrightarrow L$
2.) $\varphi(A+B)=\varphi(A)+\varphi(B)$
3.) $\varphi(cA)=c\varphi(A)$ for any $c\in \mathbb{R}$

Now I have everything I need. I finally define the vector space of anti-symmetric linear transformations of a real Lie algebra $L$ as the set
$$\mathfrak{A}(L)=\{\alpha\, : \,L\longrightarrow L\,|\,\alpha \text{ is a linear transformation and } [\alpha(X),Y]=-[X,\alpha(Y)]\text{ for all }X,Y\in L\}.$$
My observation was that $\mathfrak{A}(L)$ is again a Lie algebra. The vector space structure is easy:
$$(\alpha+\beta)(X)=\alpha(X)+\beta(X) \text{ and }(c\cdot \alpha)(X)=c\cdot \alpha(X)\text{ for }c\in \mathbb{R}$$
And I would need another lengthy post to explain that $\mathfrak{A}(L)$ is an $L$-module, that $\mathfrak{A}(L)=\{0\}$ in case $L$ is simple, and $\mathfrak{A}(L)\neq \{0\}$ in case $L$ is solvable. However, these are the facts where it starts to get interesting. All simple Lie algebras are fully classified, and the solvable Lie algebras are a mess; there are only a few known facts about them, and they are far from being sorted in any way. The fact that $\mathfrak{A}(L)=0$ for simple Lie algebras and $\mathfrak{A}(L)\neq 0$ for solvable Lie algebras makes it interesting. You can relatively easily check that

$$\mathfrak{A}\left(\underbrace{\left\{\begin{pmatrix}x&y \\0&0\end{pmatrix}\right\}}_{\text{solvable}}\right) =\underbrace{\left\{\begin{pmatrix}a&b\\c&-a\end{pmatrix}\right\}}_{\text{simple}} \quad \text{ and }\quad \mathfrak{A}\left(\left\{\begin{pmatrix}a&b\\c&-a\end{pmatrix}\right\}\right)=\left\{\begin{pmatrix}0&0\\0&0\end{pmatrix}\right\}$$
It is not really difficult. It only needs a bit of practice to deal with those entities.
 
Huh. Interesting. I wonder why no one finds this interesting. It seems like math is like social media. You never know what’s going to go viral. I still come by things that I have never heard before that I find way more interesting than the usual popular maths.
Is there an advantage to this observation?
 
Well, I'm still searching. The construction kind of reverses complexity: the simpler the (Lie) multiplication we start with, the more complex the new additional (Lie) multiplication structure becomes, and vice versa. I hoped this would lead to a method of categorizing those Lie algebras that have a simpler multiplication structure and that we do not yet know how to sort.

It is quite simple to calculate the anti-symmetric transformations. E.g. if we consider the Lie algebra
$$L=\left\{\begin{pmatrix}x&y\\0&0\end{pmatrix}\right\}$$
then it is linearly spanned by the matrices
$$X= \begin{pmatrix}1&0\\0&0\end{pmatrix}\, , \,Y=\begin{pmatrix}0&1\\0&0\end{pmatrix}$$
for which we have the multiplication rule
$$[X,Y]=X\cdot Y-Y\cdot X=Y.$$
This means if we set up $\alpha : L\to L$ with $\alpha(X)=aX+bY$ and $\alpha(Y)=cX+dY$, then
$$0=[\alpha(X),Y]+[X,\alpha(Y)]=[aX+bY,Y]+[X,cX+dY]=a[X,Y]+d[X,Y]=(a+d)Y$$
where I used $[X,X]=[Y,Y]=0$. Thus $a=-d$ and
$$\mathfrak{A}(L)=\left\{\begin{pmatrix}a&c\\b&-a\end{pmatrix}\right\}.$$
That's it. The solvable (and simply structured) $L$ became the simple (and complexly structured) $\mathfrak{A}(L)$. This works in both directions, e.g., $\mathfrak{A}(\mathfrak{A}(L))=\{0\}$. I have been given a link on SE to a paper that deals with similar transformations, but I'm particularly interested in this strange correspondence, and the paper focuses on other things. The technical terms solvable and simple are confusing here since they have historical reasons and a precise definition; their meaning in common English is a bit the opposite.
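If anyone wants to redo this little computation with a computer algebra system, here is a sketch (sympy; the variable names mirror the post, and the expected outcome is the relation $a=-d$ with $b$, $c$ free):

```python
from sympy import Matrix, symbols

a, b, c, d = symbols('a b c d')
X = Matrix([[1, 0], [0, 0]])
Y = Matrix([[0, 1], [0, 0]])
br = lambda U, V: U * V - V * U

print(br(X, Y) == Y)     # True: the multiplication rule [X, Y] = Y

alpha_X = a * X + b * Y  # alpha(X)
alpha_Y = c * X + d * Y  # alpha(Y)
condition = br(alpha_X, Y) + br(X, alpha_Y)   # must vanish entrywise
print(condition)         # Matrix([[0, a + d], [0, 0]]): zero exactly when a = -d
```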
 
I wish I could be of some help but this is not my wheelhouse. I understand the concept of Lie algebra and matrices but the notation is too foreign to me. Do you have enough to consider writing a paper?

Btw I posted a new vid and I mention you a few times. I really dislike making those vids but I feel like it’s the only way to get my ideas out there.
 
Do you have enough to consider writing a paper?

Maybe, it depends a bit on who is judging this. I think it is a bit thin, but when I look at some papers out there, then mine would not be the worst.

Btw I posted a new vid and I mention you a few times. I really dislike making those vids but I feel like it’s the only way to get my ideas out there.

Yes. That's indeed a problem. You must be part of the machinery to publish something in a serious journal.
 
I was playing around with the product of primes and I came across this formula….
[attached image: a formula expressing $e$ as a limit over the product of primes]
Apparently it is true so I thought I would substitute it into the prime number theorem.
[attached image: the substitution into the prime number theorem]
I think that’s correct. It isn’t very accurate but it does seem to get better the more terms you use. If you calculate term by term you get…
[attached image: the term-by-term values]
I checked the Online Encyclopedia of Integer Sequences (OEIS) and there was nothing. But each term does seem to give a rough estimate of the amount of primes less than the next term. Any thoughts?
 
This formula is known. We can write
$$e=\lim_{n \to \infty} \sqrt[n]{n\#}$$
where $n\#=\displaystyle{\prod_{p\ \text{prime},\ p\le n} p}$ is exactly your product. It is noted here:
The notation is called primorial, in case you want to search for a proof of this formula.

I have tried to find an English equivalent but couldn't find one in a reasonable time, sorry. Anyway, if we take it to express $e^n/n$ and "ignore" the limit, then we get
$$\dfrac{e^n}{n}\sim \dfrac{n\#}{n}$$
which is exactly what you wrote.
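The convergence is quite slow, which you can watch numerically; a sketch using sympy's primerange ($\log(n\#)$ is just the sum of $\log p$ over primes $p\le n$):

```python
from math import exp, log
from sympy import primerange

for n in (10, 100, 1000, 10_000, 100_000):
    log_primorial = sum(log(p) for p in primerange(2, n + 1))  # log(n#)
    print(n, exp(log_primorial / n))   # creeps toward e = 2.71828...
```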
 
Then I went back to looking at e like this..
1+(1/n)^n.

I am only casually looking at some of these posts. You have misplaced grouping symbols,
or insufficient grouping symbols, depending at how you look at it.

The limit as x --> oo of (1 + 1/n)^n = e

or

the limit as x --> oo of [1 + (1/n)]^n = e.
 
I was playing around with the product of primes and I came across this formula….
[attached image: the formula]

The alleged formula should be closer to $\displaystyle \lim_{p \to \infty} \sqrt[p]{2\cdot 3\cdot 5\cdot 7\cdot 11\cdots p} = e$, where $p$ is a prime number.
 
I am only casually looking at some of these posts. You have misplaced grouping symbols,
or insufficient grouping symbols, depending at how you look at it.

The limit as n --> oo of (1 + 1/n)^n = e

or

the limit as n --> oo of [1 + (1/n)]^n = e.

I had to edit what is in the quote box above. I cannot go back and edit my post #76.
Instead of "The limit as x," I changed both of them in the quote box to correctly
read as "The limit as n ."
 
There are $25$ prime numbers between $0$ and $100$.

They are: $2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97$.


How can one show this by using your formula? 🤔
 
It gets more accurate the higher you go.
 