Let $P(x)=\prod_{k=1}^{[x]}\left(1+\frac{2}{k}\right)^{k/2}$ and $L(x)=\log P(x)$. Then (deduction follows when I finish my work or get stuck)
$$L(x)=\frac{\log(3)}{2}+\log(2)+[x]-\frac{1}{2}-H_{[x]}+\sum_{n=3}^{\infty}(-1)^{n+1}\frac{2^{n-1}}{n}\bigl(\zeta(n-1,3)-\zeta(n-1,[x]+1)\bigr)$$
I needed $n-1>1$ to use the Hurwitz zeta function, i.e. $n\geq 3$.
This isn't likely the "simplification" you spoke about, and it will get worse when I replace the Hurwitz zeta function by its integral expression and the harmonic number by $H_{[x]}=\log[x]+\gamma+O(1/[x])$ with the Euler-macaroni, sorry, Euler-Mascheroni constant $\gamma$, or by an integral, too.
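If you want to check that series numerically, here is a tiny Python sketch (it assumes mpmath is available, uses its Hurwitz zeta, and simply truncates the $n$-sum at 80 terms, which is more than enough since the terms decay roughly like $(2/3)^n$; the function names are mine):

from mpmath import mp, mpf, log, zeta

mp.dps = 30

def L_series(x):
    m = int(x)
    H = sum(mpf(1)/k for k in range(1, m + 1))                     # harmonic number H_[x]
    tail = sum((-1)**(n + 1) * mpf(2)**(n - 1) / n * (zeta(n - 1, 3) - zeta(n - 1, m + 1))
               for n in range(3, 80))                              # truncated n-sum
    return log(3)/2 + log(2) + m - mpf(1)/2 - H + tail

def L_direct(x):
    return sum(mpf(k)/2 * log(1 + mpf(2)/k) for k in range(1, int(x) + 1))

print(L_series(14), L_direct(14))    # both should come out near 11.8192...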
Damn it. My video is going to only be like 2 mins long if you keep debunking my stuff lol. It’s all good though. I will also show my mistakes. My videos are more about the journey. Not the destination. Just trying to find like minded people who want to have fun with math and try new things.
Here is what I have done. I'm not sure whether this pleases you. It's a little bit complicated.
1) I defined the function $P(x)=\prod_{k=1}^{[x]}\left(1+\frac{1}{k/2}\right)^{k/2}$.
2) Your claim is thus about $P(14)$ and it says $P(14)\sim\pi(e^{15})-\pi(e^{14})$ where $\pi$ is the prime counting function, i.e. $P(14)\sim 141738$.
3) You wrote, and I corrected two typos and added the more accurate approximation with the constant 1.08366 in the denominator:
The amount of primes between $e^{14}$ and $e^{15}$ is 141738. Using $x/\log(x)$ we get 132034. $P(14)=135839.07337861$. Using $x/(\log x-1.08366)$ we get 141798. For further steps, we will use the logarithm of $P(x)$ because sums are easier to handle.
4) Comment: the approximation with 1.08366 is hard to beat. It has an error of only 60, i.e. about 0.04%.
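For the record, these numbers can be reproduced with a few lines of Python (a sketch; it assumes sympy is installed for the prime count, and the helper name P is mine):

from math import exp, prod
from sympy import primepi

def P(x):
    # P(x) = prod_{k=1}^{[x]} (1 + 2/k)^(k/2)
    return prod((1 + 2/k) ** (k/2) for k in range(1, int(x) + 1))

actual   = primepi(int(exp(15))) - primepi(int(exp(14)))             # 141738
naive    = exp(15)/15 - exp(14)/14                                    # ~132034
legendre = exp(15)/(15 - 1.08366) - exp(14)/(14 - 1.08366)            # ~141798
print(actual, round(naive), round(legendre), P(14))                   # P(14) ~ 135839.07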
5) Anyway. Here is what I have done with $L(x)=\log P(x)$.
$$\begin{aligned}
L(x)=\log(P(x))&=\sum_{k=1}^{[x]}\frac{k}{2}\log\left(1+\frac{2}{k}\right)\\
&=\frac{\log(3)}{2}+\log(2)+\sum_{k=3}^{[x]}\frac{k}{2}\log\left(1+\frac{2}{k}\right)\\
&=\frac{\log(3)}{2}+\log(2)+\sum_{k=3}^{[x]}\frac{k}{2}\sum_{n=1}^{\infty}(-1)^{n+1}\frac{2^{n}}{nk^{n}}\\
&=\frac{\log(3)}{2}+\log(2)+\sum_{k=3}^{[x]}\left(1+\sum_{n=2}^{\infty}(-1)^{n+1}\frac{2^{n-1}}{nk^{n-1}}\right)\\
&=\frac{\log(3)}{2}+\log(2)+([x]-2)+\sum_{k=3}^{[x]}\left(-\frac{1}{k}+\sum_{n=3}^{\infty}(-1)^{n+1}\frac{2^{n-1}}{nk^{n-1}}\right)\\
&=\frac{\log(3)}{2}+\log(2)+([x]-2)-\sum_{k=3}^{[x]}\frac{1}{k}+\sum_{k=3}^{[x]}\sum_{n=3}^{\infty}(-1)^{n+1}\frac{2^{n-1}}{nk^{n-1}}\\
&=\frac{\log(3)}{2}+\log(2)+[x]-\frac{1}{2}-\sum_{k=1}^{[x]}\frac{1}{k}+\sum_{n=3}^{\infty}(-1)^{n-1}\frac{2^{n-1}}{n}\sum_{k=3}^{[x]}\frac{1}{k^{n-1}}
\end{aligned}$$
We have for the harmonic series the formulas
$$\sum_{k=1}^{[x]}\frac{1}{k}=H_{[x]}=\int_0^1\frac{1-t^{[x]}}{1-t}\,dt=\log[x]+\gamma+\frac{1}{2[x]}-\frac{1}{12[x]^{2}}+\frac{1}{120[x]^{4}}-\frac{1}{252[x]^{6}}+\frac{1}{240[x]^{8}}-\frac{1}{132[x]^{10}}+O\left(\frac{1}{[x]^{12}}\right)$$
and for the Hurwitz zeta-function
$$\zeta(s,q)=\sum_{k=0}^{\infty}\frac{1}{(q+k)^{s}}=\frac{1}{\Gamma(s)}\int_0^{\infty}\frac{t^{s-1}e^{-qt}}{1-e^{-t}}\,dt,\qquad q>0,\ s>1$$
the formulas
$$\begin{aligned}
\sum_{k=3}^{[x]}\frac{1}{k^{n-1}}&=\sum_{k=0}^{[x]-3}\frac{1}{(3+k)^{n-1}}=\sum_{k=0}^{\infty}\frac{1}{(3+k)^{n-1}}-\sum_{k=[x]-2}^{\infty}\frac{1}{(3+k)^{n-1}}=\zeta(n-1,3)-\sum_{m=0}^{\infty}\frac{1}{([x]+1+m)^{n-1}}\\
&=\zeta(n-1,3)-\zeta(n-1,[x]+1)=\frac{1}{(n-2)!}\int_0^{\infty}\frac{t^{n-2}\bigl(e^{-3t}-e^{-([x]+1)t}\bigr)}{1-e^{-t}}\,dt
\end{aligned}$$
That makes in total
$$L(x)=\frac{\log(3)}{2}+\log(2)+[x]-\frac{1}{2}-\int_0^1\frac{1-t^{[x]}}{1-t}\,dt+\int_0^{\infty}\frac{e^{-3t}-e^{-([x]+1)t}}{1-e^{-t}}\left(\sum_{n=3}^{\infty}(-1)^{n-1}\frac{2^{n-1}}{n(n-2)!}\,t^{n-2}\right)dt$$
Now (using WA) we get
$$\sum_{n=3}^{\infty}(-1)^{n-1}\frac{2^{n-1}}{n(n-2)!}\,t^{n-2}=\frac{2t^{2}-1+2te^{-2t}+e^{-2t}}{2t^{2}}$$
and thus
$$I_{[x]}(t)=\frac{e^{-3t}-e^{-([x]+1)t}}{1-e^{-t}}\left(\sum_{n=3}^{\infty}(-1)^{n-1}\frac{2^{n-1}}{n(n-2)!}\,t^{n-2}\right)=\frac{e^{-3t}-e^{-([x]+1)t}}{1-e^{-t}}\cdot\frac{2t^{2}-1+2te^{-2t}+e^{-2t}}{2t^{2}}$$
and finally
$$L(x)=\log\left(\frac{2\sqrt{3}}{\sqrt{e}}\right)+[x]-H_{[x]}+\int_0^{\infty}I_{[x]}(t)\,dt.$$
Let us test our result with x=14.
$$\begin{aligned}
L(14)&=\log P(14)=\log(135839.07337861)\approx 11.81922618\\
H_{14}&=\frac{1171733}{360360}\\
\int_0^{\infty}I_{14}(t)\,dt&\approx 0.328335\\
L(14)&\approx 13.5+\log(2\sqrt{3})-\frac{1171733}{360360}+0.328335\approx 11.819226
\end{aligned}$$
There is a small error caused by the approximation of the integral by WA.
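For anyone who wants to redo the test without WA, here is a rough Python sketch of the same check (it assumes scipy is installed; the lower integration limit 1e-6 only cuts off a piece of order 1e-11 since the integrand vanishes like a multiple of $t$ at 0, and the names are mine):

from math import exp, log, sqrt
from scipy.integrate import quad

def integrand(t, n):
    # I_[x](t) with n = [x]
    return (exp(-3*t) - exp(-(n + 1)*t)) / (1 - exp(-t)) \
         * (2*t*t - 1 + 2*t*exp(-2*t) + exp(-2*t)) / (2*t*t)

def L_closed(n):
    H = sum(1.0/k for k in range(1, n + 1))                  # harmonic number H_n
    J, _ = quad(integrand, 1e-6, 60, args=(n,))              # tail beyond t = 60 is negligible
    return log(2*sqrt(3)) - 0.5 + n - H + J                  # log(2*sqrt(3)/sqrt(e)) = log(2*sqrt(3)) - 1/2

def L_direct(n):
    return sum(k/2 * log(1 + 2/k) for k in range(1, n + 1))  # log P(n) summed directly

print(L_closed(14), L_direct(14))                            # both should be about 11.8192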
6) At least you have a nice formula and you could test values for x other than 14. I think they get better with increasing x but not as good as Legendre's approximation.
7) So much for "within minutes". Nice constant, by the way.
I have checked the numbers for $P(20)$ and the number of primes between $e^{20}$ and $e^{21}$, compared them with the numbers for $P(14)$ and the number of primes between $e^{14}$ and $e^{15}$, and compared both with Legendre's approximation. I have found:
$$\begin{aligned}
\Delta_{\text{abs}}(14)&=\pi(e^{15})-\pi(e^{14})-[P(14)]=5899\\
\Delta_{\text{rel}}(14)&=\frac{\Delta_{\text{abs}}(14)}{\pi(e^{15})-\pi(e^{14})}\approx 4.162\,\%\\
\Delta^{L}_{\text{abs}}(14)&=\pi(e^{15})-\pi(e^{14})-\left[\frac{e^{15}}{15-1.08366}\right]+\left[\frac{e^{14}}{14-1.08366}\right]=141738-234905+93107=-60\\
\Delta^{L}_{\text{rel}}(14)&=\frac{\Delta^{L}_{\text{abs}}(14)}{\pi(e^{15})-\pi(e^{14})}\approx -0.0423\,\%\\
\Delta_{\text{abs}}(20)&=\pi(e^{21})-\pi(e^{20})-[P(20)]=774075\\
\Delta_{\text{rel}}(20)&=\frac{\Delta_{\text{abs}}(20)}{\pi(e^{21})-\pi(e^{20})}\approx 1.911\,\%\\
\Delta^{L}_{\text{abs}}(20)&=\pi(e^{21})-\pi(e^{20})-\left[\frac{e^{21}}{21-1.08366}\right]+\left[\frac{e^{20}}{20-1.08366}\right]=40510215-66217776+25647942=-59619\\
\Delta^{L}_{\text{rel}}(20)&=\frac{\Delta^{L}_{\text{abs}}(20)}{\pi(e^{21})-\pi(e^{20})}\approx -0.147\,\%
\end{aligned}$$
This does not prove my claim, but gives evidence that your formula gets better with higher numbers, and that Legendre is unbeatable.
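If someone wants to redo this comparison, here is a small Python sketch; it recomputes only $P(x)$ and the Legendre terms and takes the prime counts 141738 and 40510215 from the posts above (off-by-one differences to the table are possible depending on where one rounds):

from math import exp, prod

def P(x):
    return prod((1 + 2/k) ** (k/2) for k in range(1, int(x) + 1))

def legendre_diff(x):
    return exp(x + 1)/(x + 1 - 1.08366) - exp(x)/(x - 1.08366)

for x, count in ((14, 141738), (20, 40510215)):
    d_abs = count - int(P(x))
    d_leg = count - int(legendre_diff(x))
    print(x, d_abs, f"{100*d_abs/count:.3f}%", d_leg, f"{100*d_leg/count:.4f}%")
# roughly 4.2% / -0.04% for x = 14 and 1.9% / -0.15% for x = 20, as in the table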
lol oh no I have created a monster. I really do love this game. Been playing it for probably 20 years. The very first time I heard that the distribution of primes was random I was hooked. The very first thing I did was find some graph paper and started to write the integers in a spiral and circle the primes. Freaked out when I noticed the long lines. Then when I found out that this was only discovered in the 1960s I realized this is a game that anybody can play. If I could find something that was only discovered 60 years ago, which is like yesterday in the math world, anyone could.
Did you watch my first video on using the golden ratio for primes? I found how you can also sum primes to get prime density. And that was only discovered a few years ago. It’s very addictive.
That makes mathematics different from other sciences. You can basically "define" things and prove theorems about it. Whether they serve a purpose is another question. I also have a hobbyhorse. I made a simple observation (provable) that nobody else seems to be interested in. The problem: I have a lot of examples but I cannot find a tool that reliably delivers results. Only a whole bunch of even more examples.
No problem. The only question is where I may start from. It is about Lie algebras. That sounds more complicated than it is, but it would help a lot if I may use matrices and their multiplication.
That's a bit like explaining the grammar of if-clauses without using words of Latin origin.
I need a vector space. That is a set of objects which can be added and subtracted, and which contains a zero object that does not change an object if added. Moreover, addition needs to be commutative, i.e. symmetric, $X+Y=Y+X$, and associative, $X+(Y+Z)=(X+Y)+Z$; the order is irrelevant. Ordinary integers, quotients, or real numbers are examples. However, vector spaces require a second operation, called multiplication with scalars. The word scalar can be taken literally as it means to scale the objects, i.e. we can stretch them or compress them, or leave them unchanged if the scaling factor is one. Arrows are such objects. They can be added like linear forces can be added, and their lengths change if we multiply them with a scalar, a scaling factor. We do this every day without noticing (sorry, I'll keep it metric): if we buy 100 g of sweets and the price on the shelf is per kg, then we compress the weight force of 1 kg by a factor of 10 and carry home 100 g of sweets. Weights are vectors: they can be added, subtracted, and scaled. And they are arrows as they have a direction; they always point from the mass to the center of the Earth. The elements of a vector space are called vectors.

I need square matrices, square number schemes, as vectors, e.g.
$$\begin{pmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{pmatrix}.$$
Addition and scalar multiplication are componentwise. The zero vector 0 is the matrix with zeros everywhere.
$$\begin{pmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{pmatrix}+\begin{pmatrix}b_{11}&b_{12}&b_{13}\\b_{21}&b_{22}&b_{23}\\b_{31}&b_{32}&b_{33}\end{pmatrix}=\begin{pmatrix}a_{11}+b_{11}&a_{12}+b_{12}&a_{13}+b_{13}\\a_{21}+b_{21}&a_{22}+b_{22}&a_{23}+b_{23}\\a_{31}+b_{31}&a_{32}+b_{32}&a_{33}+b_{33}\end{pmatrix}$$
$$c\cdot\begin{pmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{pmatrix}=\begin{pmatrix}ca_{11}&ca_{12}&ca_{13}\\ca_{21}&ca_{22}&ca_{23}\\ca_{31}&ca_{32}&ca_{33}\end{pmatrix}$$
Now, it gets a little bit messy. We have another multiplication: vectors times vectors, or here matrices times matrices. It is defined as (I only take three rows and columns, but any positive integer is possible)
$$\begin{pmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{pmatrix}\cdot\begin{pmatrix}b_{11}&b_{12}&b_{13}\\b_{21}&b_{22}&b_{23}\\b_{31}&b_{32}&b_{33}\end{pmatrix}=\begin{pmatrix}a_{11}b_{11}+a_{12}b_{21}+a_{13}b_{31}&a_{11}b_{12}+a_{12}b_{22}+a_{13}b_{32}&a_{11}b_{13}+a_{12}b_{23}+a_{13}b_{33}\\a_{21}b_{11}+a_{22}b_{21}+a_{23}b_{31}&a_{21}b_{12}+a_{22}b_{22}+a_{23}b_{32}&a_{21}b_{13}+a_{22}b_{23}+a_{23}b_{33}\\a_{31}b_{11}+a_{32}b_{21}+a_{33}b_{31}&a_{31}b_{12}+a_{32}b_{22}+a_{33}b_{32}&a_{31}b_{13}+a_{32}b_{23}+a_{33}b_{33}\end{pmatrix}$$
Matrix multiplication can be memorized as "row times column". The first index always names the row, the second one the column. It is still associative but no longer commutative, i.e. $A\cdot(B\cdot C)=(A\cdot B)\cdot C$ but in general $A\cdot B\neq B\cdot A$. For example, matrices which only have non-zero entries on their diagonal commute; general matrices do not.
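If it helps, "row times column" is only a few lines of code; here is a plain Python sketch (names are mine) that also shows the order matters:

def matmul(A, B):
    # entry (i, j) of A*B is row i of A times column j of B
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(matmul(A, B))   # [[2, 1], [4, 3]]
print(matmul(B, A))   # [[3, 4], [1, 2]]  -> A*B and B*A differ in general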
We next define yet another multiplication by setting [A,B]=A⋅B−B⋅A. This multiplication has some strange properties:
1.) [A,B+C]=[A,B]+[A,C] and [A+B,C]=[A,C]+[B,C]
2.) c[A,B]=[cA,B]=[A,cB] for real numbers c∈R
3.) [A,A]=0
4.) [A,[B,C]]+[B,[C,A]]+[C,[A,B]]=0
Such a structure, here a real vector space with $[\cdot,\cdot]$ as multiplication, is called a Lie algebra. It is less complicated if we look at examples. For instance, all matrices of the form $\begin{pmatrix}x&y\\0&0\end{pmatrix}$ form a Lie algebra, as do all matrices of the form $\begin{pmatrix}x&y\\z&-x\end{pmatrix}$.
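If you like to see the rules 1.) - 4.) and the first example in action, here is a small Python sketch (plain lists as matrices, helper names are mine):

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k]*B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A))] for i in range(len(A))]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(len(A))] for i in range(len(A))]

def bracket(A, B):
    return sub(matmul(A, B), matmul(B, A))

# three matrices of the form (x y; 0 0)
A, B, C = [[1, 2], [0, 0]], [[3, -1], [0, 0]], [[0, 5], [0, 0]]
print(bracket(A, A))   # [[0, 0], [0, 0]]    (property 3)
print(bracket(A, B))   # [[0, -7], [0, 0]]   still of the form (x y; 0 0)
jacobi = add(add(bracket(A, bracket(B, C)), bracket(B, bracket(C, A))), bracket(C, bracket(A, B)))
print(jacobi)          # [[0, 0], [0, 0]]    (property 4)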
There is no category of objects in mathematics without corresponding functions. In the case of vector spaces, these are linear transformations. I need linear transformations between the same Lie algebra, i.e. if L denotes the Lie algebra, I need functions φ with the properties
1.) φ:L⟶L
2.) φ(A+B)=φ(A)+φ(B)
3.) φ(cA)=cφ(A) for any c∈R
Now I have everything I need. I finally define the vector space of anti-symmetric linear transformations of a real Lie algebra $L$ as the set
$$A(L)=\{\alpha:L\longrightarrow L\mid \alpha\text{ is a linear transformation and }[\alpha(X),Y]=-[X,\alpha(Y)]\text{ for all }X,Y\in L\}.$$
My observation was that A(L) is again a Lie algebra. The vector space structure is easy: (α+β)(X)=α(X)+β(X) and (c⋅α)(X)=c⋅α(X) for c∈R
And I would need another lengthy post to explain that $A(L)$ is an $L$-module, that $A(L)=\{0\}$ in case $L$ is simple, and $A(L)\neq\{0\}$ in case $L$ is solvable. However, these are the facts where it starts to get interesting. All simple Lie algebras are fully classified, and the solvable Lie algebras are a mess. There are only a few known facts about them and they are far from being sorted in any way. The fact that $A(L)=\{0\}$ for simple Lie algebras and $A(L)\neq\{0\}$ for solvable Lie algebras makes it interesting. You can relatively easily check that
$$A\Bigl(\underbrace{\Bigl\{\begin{pmatrix}x&y\\0&0\end{pmatrix}\Bigr\}}_{\text{solvable}}\Bigr)=\underbrace{\Bigl\{\begin{pmatrix}a&b\\c&-a\end{pmatrix}\Bigr\}}_{\text{simple}}\quad\text{and}\quad A\Bigl(\Bigl\{\begin{pmatrix}a&b\\c&-a\end{pmatrix}\Bigr\}\Bigr)=\Bigl\{\begin{pmatrix}0&0\\0&0\end{pmatrix}\Bigr\}$$
It is not really difficult. It only needs a bit of practice to deal with those entities.
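If anyone wants to verify the displayed claim without doing it by hand, here is a sympy sketch (my own helper; it parametrizes a linear map by its coefficient matrix with respect to a basis of $L$ and solves the anti-symmetry condition):

from sympy import Matrix, symbols, linsolve, zeros

def antisymmetric_maps(basis):
    n = len(basis)
    m = symbols(f'm0:{n*n}')
    M = Matrix(n, n, list(m))                      # alpha(basis[j]) = sum_i M[i, j] * basis[i]
    alpha = lambda j: sum((M[i, j] * basis[i] for i in range(n)), zeros(*basis[0].shape))
    br = lambda A, B: A*B - B*A
    eqs = []
    for i in range(n):
        for j in range(n):
            eqs.extend(br(alpha(i), basis[j]) + br(basis[i], alpha(j)))   # must all vanish
    return linsolve(eqs, list(m))

solvable = [Matrix([[1, 0], [0, 0]]), Matrix([[0, 1], [0, 0]])]                              # spans {(x y; 0 0)}
simple   = [Matrix([[1, 0], [0, -1]]), Matrix([[0, 1], [0, 0]]), Matrix([[0, 0], [1, 0]])]   # spans {(a b; c -a)}
print(antisymmetric_maps(solvable))   # three free parameters, coefficient matrix has trace zero
print(antisymmetric_maps(simple))     # only the zero solution, i.e. A(L) = {0}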
Huh. Interesting. I wonder why no one finds this interesting. It seems like math is like social media. You never know what’s going to go viral. I still come by things that I have never heard before that I find way more interesting than the usual popular maths.
Is there an advantage to this observation?
Well, I'm still searching. I hoped that the way it reverses the complexity of the multiplication (the simpler the Lie multiplication we start with, the more complex the new additional Lie multiplication structure, and vice versa) would lead to a method of categorizing those Lie algebras that have a simpler multiplication structure and that we do not yet know how to sort.
It is quite simple to calculate the anti-symmetric transformations. E.g. if we consider the Lie algebra $L=\left\{\begin{pmatrix}x&y\\0&0\end{pmatrix}\right\}$ then it is linearly spanned by the matrices $X=\begin{pmatrix}1&0\\0&0\end{pmatrix},\ Y=\begin{pmatrix}0&1\\0&0\end{pmatrix}$ for which we have the multiplication rule $[X,Y]=X\cdot Y-Y\cdot X=Y$.
This means if we set up $\alpha:L\to L$ with $\alpha(X)=aX+bY$ and $\alpha(Y)=cX+dY$, then
$$0=[\alpha(X),Y]+[X,\alpha(Y)]=[aX+bY,Y]+[X,cX+dY]=a[X,Y]+d[X,Y]=(a+d)Y$$
where I used $[X,X]=[Y,Y]=0$. Thus $a=-d$ and $A(L)=\left\{\begin{pmatrix}a&b\\c&-a\end{pmatrix}\right\}$.
That's it. The solvable (and simply structured) $L$ became the simple (and more richly structured) $A(L)$. This works in both directions, e.g., $A(A(L))=\{0\}$. I have been given a link on SE to a paper that deals with similar transformations, but I'm particularly interested in this strange correspondence and the paper focuses on other things. The technical terms solvable and simple are confusing here: they have historical reasons and a precise definition, and their meaning in common English is a bit the opposite.
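In case it helps, here is the same little computation in sympy (a sketch with my own variable names):

from sympy import Matrix, symbols, solve

a, b, c, d = symbols('a b c d')
X = Matrix([[1, 0], [0, 0]]); Y = Matrix([[0, 1], [0, 0]])
br = lambda A, B: A*B - B*A
print(br(X, Y))                               # equals Y, the multiplication rule above
cond = br(a*X + b*Y, Y) + br(X, c*X + d*Y)    # [alpha(X), Y] + [X, alpha(Y)], must vanish
print(cond)                                   # Matrix([[0, a + d], [0, 0]]), i.e. (a + d)*Y
print(solve(cond[0, 1], d))                   # [-a]: the only constraint is d = -a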
I wish I could be of some help but this is not my wheelhouse. I understand the concept of Lie algebra and matrices but the notation is too foreign to me. Do you have enough to consider writing a paper?
Btw I posted a new vid and I mention you a few times. I really dislike making those vids but I feel like it’s the only way to get my ideas out there.
I was playing around with the product of primes and I came across this formula….
Apparently it is true so I thought I would substitute it into the prime number theorem.
I think that’s correct. It isn’t very accurate but it does seem to get better the more terms you use. If you calculate term by term you get…
I checked the online encyclopedia of integer sequences and there was nothing. But each term does seem to give a rough estimate of the amount of primes less than the next term. Any thoughts?
I have tried to find an English equivalent but couldn't find one in a reasonable time, sorry. Anyway, if we take it to express $e^n/n$ and "ignore" the limit, then we get $\frac{e^n}{n}\sim\frac{n\#}{n}$, which is exactly what you wrote.
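A quick way to see the sense in which that holds (a Python sketch; it assumes sympy, whose primorial(n, nth=False) is the product of the primes up to n, i.e. n#):

from math import log
from sympy import primorial

for n in (10, 100, 1000, 10000):
    print(n, log(primorial(n, nth=False)) / n)   # log(n#)/n
# roughly 0.53, 0.94, 0.96, 0.99: it creeps towards 1, which is the sense in which e^n ~ n#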
I am only casually looking at some of these posts. You have misplaced grouping symbols, or insufficient grouping symbols, depending on how you look at it.
I had to edit what is in the quote box above; I cannot go back and edit my post #76. Instead of "The limit as x," I changed both of them in the quote box to correctly read "The limit as n."