On Sat Feb 23 19:20:20 2013, KRYDE wrote:
> Thanks. I think the sieve was first and seemed like a good idea at the
> time. I didn't realize the prime factorization was now better.
It's also possible that different machines or arguments would give
different timings; I only did timings on a couple of x86 Linux machines.
> > The sum change is a hack because Totient->ith(0) returns 1 instead of 0
> > like _totient_by_sieve(0). Technically it's undefined,
>
> Hmm. Perhaps it was never meant for i=0. I probably mean i<=0 in the
> "undef" condition of the ith() func of Totient.pm for a start.
Quite possible. Apostol and Koblitz define it only for n >= 1. Hardy
and Wright define it as "the number of positive integers not greater
than and prime to m, that is to say the number of integers n such that 0
< n <= m, (n,m) = 1" which would support the idea that phi(n) = 0 for n
<= 0.
Wolfram says: "By convention, phi(0)=1, although Mathematica defines
EulerPhi[0] equal to 0 for consistency with its FactorInteger[0] command."
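For what it's worth, counting straight from the Hardy and Wright
definition above makes phi(0) = 0 fall out naturally, since the range
1..0 is empty. A quick pure-Perl illustration (not code from either
module, just the definition):

    sub gcd { my ($x, $y) = @_; ($x, $y) = ($y, $x % $y) while $y; $x }

    # phi(m) = number of integers n with 0 < n <= m and (n,m) = 1.
    # The 1 .. $m range is empty for $m <= 0, so phi(0) comes out 0.
    sub phi_by_definition {
        my $m = shift;
        return scalar grep { gcd($_, $m) == 1 } 1 .. $m;
    }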
> > ith($i) could be done using
> > List::Util::sum(euler_phi(0,$i)) which takes slightly over 1s for 10M.
>
> That pushes $i many return values on the stack does it (the perl stack
> that is)? Something done in chunks might be a compromise between size
> and time.
Yes, doing the one big sum could lead to big memory issues; segmenting
it would be better. Right now MPU's ranged moebius function is
segmented internally, but the totient isn't. The ranged call can still
help since the C array is a lot smaller than the equivalent Perl list,
but at some point I'll need to do it right.
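Something like the following is the shape I have in mind for doing it
in chunks from the Perl side -- a sketch only, calling the existing
ranged euler_phi in fixed-size windows so at most one window of return
values is on the Perl stack at a time (the window size is arbitrary):

    use Math::Prime::Util qw(euler_phi);
    use List::Util qw(sum);

    # Sum phi(1) .. phi($n) in segments instead of one giant list.
    sub totient_sum {
        my ($n, $window) = (@_, 65536);
        my $total = 0;
        for (my $lo = 1; $lo <= $n; $lo += $window) {
            my $hi = $lo + $window - 1;
            $hi = $n if $hi > $n;
            $total += sum(euler_phi($lo, $hi));
        }
        return $total;
    }

That only bounds the Perl-side memory, of course; the C side is a
separate matter until the ranged totient gets segmented the way the
moebius one is.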
I was interested in ranged totients for things like Project Euler
solutions, but I'd never come across a need for the summation. I added
the Mertens function separately for summing the Möbius function, as
there are efficient ways to calculate it.
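So the Möbius sum is just mertens($n). A quick cross-check against the
ranged moebius (assuming the full list for the range fits in memory at
this size):

    use Math::Prime::Util qw(mertens moebius);
    use List::Util qw(sum);

    my $n = 100_000;
    # Mertens function: M(n) = sum of moebius(k) for k = 1 .. n.
    my $direct = mertens($n);
    my $by_sum = sum(moebius(1, $n));
    printf "M(%d) = %d (summed: %d)\n", $n, $direct, $by_sum;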
> I had meant to try calling some of your funcs as available, but never
> got around to it. In a few places I had the urge to try to support
> bignums which are entirely small prime factors with some sort of limit
> on the trial division based on the size of the input. Doing so can
> sometimes get answers when plotting big or slightly big numbers. Though
> where to put a limit is a bit arbitrary.
No problem -- there is something to be said for using well-tested
modules. The one thing it would allow is 64-bit (or bignum) support for
a lot of functions. The fast prime_count and nth_prime might be nice.
divisor_sum and jordan_totient help with a few more OEIS series. I've
thought about making a list of OEIS sequences along with the code to
generate them, especially the ones that are nearly one-liners.
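For example, a few that are nearly one-liners with divisor_sum and
jordan_totient (sequence numbers from memory; worth double-checking
against OEIS):

    use Math::Prime::Util qw(divisor_sum jordan_totient);

    # A000203, sigma(n): sum of the divisors of n.
    print join(" ", map { divisor_sum($_) } 1 .. 20), "\n";

    # A000005, tau(n): number of divisors (sum 1 over each divisor).
    print join(" ", map { divisor_sum($_, sub { 1 }) } 1 .. 20), "\n";

    # A007434, Jordan totient J_2(n).
    print join(" ", map { jordan_totient(2, $_) } 1 .. 20), "\n";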
As for the bignum idea, do you mean doing a GCD on primorials? I have
some places that do GCDs against the product of the primes up to 20046,
but those are just for speed when doing quick primality checks on big
(512 to 4096 bit) numbers. It's very fast with GMP or Pari, but the
default BigInt backend is deathly slow for big GCDs.
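The check is roughly this shape -- a sketch, not the actual code -- one
bignum GCD against a precomputed product of the small primes, which
rejects most composites without any trial division or probable-prime
tests:

    use Math::Prime::Util qw(primorial);
    use Math::BigInt try => 'GMP';

    # Product of all primes <= 20046; far too big for a native
    # integer, so primorial returns it as a bigint.
    my $prime_product = primorial(20046);

    # True if $n (a Math::BigInt) has a prime factor <= 20046.
    sub has_small_factor {
        my $n = shift;
        return Math::BigInt::bgcd($n, $prime_product) > 1;
    }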
It looks like on my 64-bit machine, MPU averages under 100 microseconds
to factor random 19-digit numbers. Restricting to numbers with no
factors under 100 still gives times under 1 millisecond per number. In
fact, it's faster than calling Math::Pari::factorint for almost all
64-bit inputs. My SQUFOF code could still use a little more
optimization. Pari is definitely better once we get past 25 or so
digits, as my ECM implementation is pretty basic and I don't have QS
implemented. Still, it should let you factor most 30-digit numbers in
under a second (pathetic by the standards of yafu, msieve, etc., but
pretty good for Perl).
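In case it's useful, the timing loop behind those numbers is nothing
fancy -- something like this sketch (factor() returns the prime factors
as a flat list, so that's the whole interface):

    use Math::Prime::Util qw(factor);
    use Time::HiRes qw(gettimeofday tv_interval);

    my @f = factor(600851475143);   # (71, 839, 1471, 6857)

    # Time factoring of random 19-digit numbers (64-bit Perl assumed).
    my $count   = 1000;
    my @numbers = map { int(1e18 + rand(8e18)) } 1 .. $count;
    my $t0 = [gettimeofday];
    factor($_) for @numbers;
    printf "%.1f usec per factorization\n",
           1e6 * tv_interval($t0) / $count;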