
Given an array of N positive integers, all numbers having precisely 10 digits, I want to fill each cell of the array with the product of all the numbers except the one in that cell.

What is the best complexity you can get (taking into consideration that the cost of computing a*b is never 1)?

-----------
Just for a start, I will compute the cost of computing the product of all the numbers in the array:

Cost(product) = Sum over i of Cost( (a number with roughly 10*i digits) * (a number with 10 digits) )

Assuming one uses an inefficient method to compute a*b, namely the traditional schoolbook algorithm whose cost is n*m for an n-digit number times an m-digit number, then Cost( (a number with 10*i digits) * (a number with 10 digits) ) = 10*i * 10 = 100*i.

Summing 100*i for i = 1 .. N-1, it is obvious that the cost of the product algorithm is O(N^2).
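Just to make that sum concrete, here is a small sketch (my own illustration, not code from the thread) that charges each multiply the schoolbook cost of digits(a) * digits(b) and totals it up for a left-to-right product of N ten-digit numbers:

```java
// Model the cost of multiplying N ten-digit numbers in sequence, charging
// each multiply n*m elementary digit operations (the schoolbook cost).
public class NaiveCostModel {

    // Total modeled digit operations for a left-to-right product of n numbers.
    static long digitOps(int n) {
        long total = 0;
        long interimDigits = 10;         // the running product starts as one 10-digit factor
        for (int i = 1; i < n; i++) {
            total += interimDigits * 10; // schoolbook cost: digits(interim) * digits(factor)
            interimDigits += 10;         // the product gains roughly 10 digits per step
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(digitOps(100)); // 495000, i.e. 50 * 100 * 99
        System.out.println(digitOps(200)); // 1990000: doubling N roughly quadruples the cost
    }
}
```

The closed form is 50*N*(N-1), which is why doubling N quadruples the total -- the signature of O(N^2).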

Myke Enriq wrote:
What is the best complexity you can get (taking into consideration that the cost of computing a*b is never 1)?

Please elaborate on this... are you saying that the cost of multiplying two numbers should not be of order one?

Henry

Obviously, the cost of multiplying two numbers is never 1. It is always a function of the number of digits of those two numbers.

If a has n digits and b has m digits, then the cost of multiplying a*b is a function Cost(n, m).

---------------------------

Having said that, the classical algorithm that we use to multiply two numbers on a piece of paper has complexity O(n^2) if both numbers have n digits.
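For concreteness, that paper-and-pencil algorithm can be sketched on decimal digit arrays like this (my own illustration; the little-endian digit layout is an assumption, not anything from the thread). The nested loops visit every pair of digits exactly once, which is where the O(n*m) cost -- O(n^2) for two n-digit numbers -- comes from:

```java
import java.util.Arrays;

// Schoolbook multiplication on arrays of decimal digits,
// stored least significant digit first.
public class Schoolbook {

    static int[] multiply(int[] a, int[] b) {
        int[] result = new int[a.length + b.length];
        for (int i = 0; i < a.length; i++) {
            int carry = 0;
            for (int j = 0; j < b.length; j++) {
                int t = result[i + j] + a[i] * b[j] + carry;
                result[i + j] = t % 10;
                carry = t / 10;
            }
            result[i + b.length] += carry;
        }
        return result;
    }

    public static void main(String[] args) {
        // 47 * 21 = 987; digits are least significant first, so 47 is {7, 4}.
        System.out.println(Arrays.toString(multiply(new int[]{7, 4}, new int[]{1, 2})));
        // prints [7, 8, 9, 0], i.e. 987 with a leading zero digit
    }
}
```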

Myke Enriq wrote:
Obviously, the cost of multiplying two numbers is never 1. It is always a function of the number of digits of those two numbers.

If a has n digits and b has m digits, then the cost of multiplying a*b is a function Cost(n, m).

---------------------------

Having said that, the classical algorithm that we use to multiply two numbers on a piece of paper has complexity O(n^2) if both numbers have n digits.

Then you are kinda using the wrong terminology here, or trying to apply terminology to something that isn't designed for it.

First, remember that when something has the complexity O(n^2), that measures its growth as n grows to infinity. It is not "if both numbers have n digits"; the assumption is always "as n goes to infinity".

But second, the first point is moot. It doesn't matter what the complexity of multiplying two numbers is -- even if n is infinity. The original question is about an N for the size of an array -- and the number of digits is fixed at 10. Big-O can only calculate the complexity as one term goes to infinity. If more than one term goes to infinity, then it is based on the term that grows faster.

Henry

Myke Enriq
Ranch Hand
Joined: Feb 13, 2012
Posts: 115
Having N numbers of 10 digits each means that when we try to compute the product of all those N numbers we must not consider the multiplication of two numbers as having cost 1.

The reason for that is that while the cost is a constant O(1) when computing the multiplication of two numbers of 10 digits each, the cost of computing the multiplication of two numbers having N/2 digits each (or one having N/2 digits and the other 10 digits, and so on) is no longer O(1).

Henry Wong wrote:

Myke Enriq wrote:
Obviously, the cost of multiplying two numbers is never 1. It is always a function of the number of digits of those two numbers.

If a has n digits and b has m digits, then the cost of multiplying a*b is a function Cost(n, m).

---------------------------

Having said that, the classical algorithm that we use to multiply two numbers on a piece of paper has complexity O(n^2) if both numbers have n digits.

Then you are kinda using the wrong terminology here, or trying to apply terminology to something that isn't designed for it.

First, remember that when something has the complexity O(n^2), that measures its growth as n grows to infinity. It is not "if both numbers have n digits"; the assumption is always "as n goes to infinity".

But second, the first point is moot. It doesn't matter what the complexity of multiplying two numbers is -- even if n is infinity. The original question is about an N for the size of an array -- and the number of digits is fixed at 10. Big-O can only calculate the complexity as one term goes to infinity. If more than one term goes to infinity, then it is based on the term that grows faster.

Myke Enriq wrote: Having N numbers of 10 digits each means that when we try to compute the product of all those N numbers we must not consider the multiplication of two numbers as having cost 1.

The reason for that is that while the cost is a constant O(1) when computing the multiplication of two numbers of 10 digits each, the cost of computing the multiplication of two numbers having N/2 digits each (or one having N/2 digits and the other 10 digits, and so on) is no longer O(1).

First, Big-O notation does not measure cost. Please don't mix those terms. I can have two algorithms: one takes a second to calculate something regardless of the size of the array I send to it, and one takes an hour to calculate the same thing regardless of the size of the array I send to it; and they both have a Big-O of one.

Second, if there is more than one term that grows, then only the bigger term counts. Remember, as N or n goes to infinity, one of the terms will disappear. In this case, as the N of the array goes to infinity, the 10-digit multiply doesn't matter.

In other words, you need to elaborate on which term grows faster, or the relationship of growth between them (such as n = N/2).

Henry

Myke Enriq

Big-O notation is a measure of cost. It tries to give an upper bound on the cost of an algorithm.

As for a number of k digits being multiplied by a number of 10 digits: this would not matter in regard to Big-O notation (meaning we could consider it cost O(1)) if the result were a number that still has k digits. But when you multiply the two you get a number of k+10 digits, and that result is then the input for some other step of your algorithm, so it matters.

--------

For example, let's assume that, given N numbers of 10 digits each, one wrongly assumes that the cost of multiplying any two numbers (no matter what their size is) is constant.
Then the brute-force solution would be O(N). This is clearly not so: if you write the algorithm and graph the running times, you will see it is not upper-bounded by any linear function.

Henry Wong wrote:

First, Big-O notation does not measure cost. Please don't mix those terms. I can have two algorithms: one takes a second to calculate something regardless of the size of the array I send to it, and one takes an hour to calculate the same thing regardless of the size of the array I send to it; and they both have a Big-O of one.

Second, if there is more than one term that grows, then only the bigger term counts. Remember, as N or n goes to infinity, one of the terms will disappear. In this case, as the N of the array goes to infinity, the 10-digit multiply doesn't matter.

In other words, you need to elaborate on which term grows faster, or the relationship of growth between them (such as n = N/2).

If every number in the array is 10 digits ("all numbers having precisely 10 digits"), the cost of multiplying the 1st and the 18th is exactly the same as multiplying the 97th and the 29,836,883rd. No matter which two numbers you pick, the cost of multiplying them is the same, since it is always a 10-digit number times a 10-digit number.

So, the cost of doing the multiplication can be ignored in the computation.

There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors

Myke Enriq

fred rosenberger wrote: If every number in the array is 10 digits ("all numbers having precisely 10 digits"), the cost of multiplying the 1st and the 18th is exactly the same as multiplying the 97th and the 29,836,883rd. No matter which two numbers you pick, the cost of multiplying them is the same, since it is always a 10-digit number times a 10-digit number.

So, the cost of doing the multiplication can be ignored in the computation.

I agree that the cost of multiplying two numbers having 10 digits each is a constant c1.

However, if you have multiplied N-1 of those N numbers, so that you have a product p of 10(N-1) digits, then the cost of multiplying that product by the last 10-digit number becomes O(N).

Myke Enriq wrote:
I agree that the cost of multiplying two numbers having 10 digits each is a constant c1.

However, if you have multiplied N-1 of those N numbers, so that you have a product p of 10(N-1) digits, then the cost of multiplying that product by the last 10-digit number becomes O(N).

I agree. The complexity to multiply N numbers is O(N).

And... The complexity to multiply N-1 numbers is also O(N). The complexity to multiply 2 numbers is still O(1) though.

Henry

You may be simplifying too early. The complexity depends on the number of digits in the numbers. Normally this doesn't matter because we limit ourselves to data types with a fixed range, but when considering arbitrary precision numbers we have to account for the number of digits. The number of digits needed depends on the size of the list. As the list grows, the maximum possible number grows. Try calculating the number of basic operations required first before applying big-O notation.

Sresh Rangi wrote:
You may be simplifying too early. The complexity depends on the number of digits in the numbers. Normally this doesn't matter because we limit ourselves to data types with a fixed range, but when considering arbitrary precision numbers we have to account for the number of digits. The number of digits needed depends on the size of the list. As the list grows, the maximum possible number grows. Try calculating the number of basic operations required first before applying big-O notation.

Please read the context of the response first.

Yes, in the discussion of a multiplication algorithm, it is O(N^2) where N is related to the number of digits in the numbers... but we are not talking about that. We are talking about N as the number of elements in the array of numbers, with 10 being the number of digits. From this, it is O(1) for the multiply of two numbers.

Now, if you can argue a case where the N in the number of digits is larger than the number of numbers to be multiplied, or where there is a relationship between them, then I will change my statement. But the OP didn't say that... the OP stated N as the number of numbers and that they are 10-digit numbers. From that, and until the OP elaborates, we can only infer that we are measuring in terms of the number of numbers, and not the size of the numbers (in digits).

Henry

Sresh Rangi
Ranch Hand
Joined: Nov 28, 2012
Posts: 49

Henry Wong wrote:

Sresh Rangi wrote:
You may be simplifying too early. The complexity depends on the number of digits in the numbers. Normally this doesn't matter because we limit ourselves to data types with a fixed range, but when considering arbitrary precision numbers we have to account for the number of digits. The number of digits needed depends on the size of the list. As the list grows, the maximum possible number grows. Try calculating the number of basic operations required first before applying big-O notation.

Please read the context of the response first.

Yes, in the discussion of a multiplication algorithm, it is O(N^2) where N is related to the number of digits in the numbers... but we are not talking about that. We are talking about N as the number of elements in the array of numbers, with 10 being the number of digits. From this, it is O(1) for the multiply of two numbers.

Now, if you can argue a case where the N in the number of digits is larger than the number of numbers to be multiplied, or where there is a relationship between them, then I will change my statement. But the OP didn't say that... the OP stated N as the number of numbers and that they are 10-digit numbers. From that, and until the OP elaborates, we can only infer that we are measuring in terms of the number of numbers, and not the size of the numbers (in digits).

Henry

We are measuring in terms of the number of numbers. You have a list of N numbers, each with 10 digits, but the size of the numbers that are being multiplied depends on the size of the list. The OP mentioned this relationship in the post you replied to.

Take a list made up of 2-digit numbers and multiply them together:

90 * 91 = 8,190
8,190 * 92 = 753,480
753,480 * 93 = 70,073,640

The number of digits in the last number is much more than the 2 we started with. This can grow the same as the number of numbers in the list, so it can't be ignored.

I think Sresh and Myke are right - the original post speaks about the product of all N numbers in the array (except one, to be precise). The number of digits in this product is proportional to N. And the complexity of computing a product of N numbers is O(N^2) - a bigger N means more factors to multiply as well as more digits in the intermediate results.

Now, the naive approach would be to compute N products of N-1 numbers each (that is, to compute each cell from scratch), which would give a total complexity of O(N^3).

A better approach might be to compute the product of all the numbers once (complexity O(N^2)), and then divide the resulting product by the number in each cell of the array to obtain the result to put in that cell. If the complexity of dividing an N-digit number by a 10-digit number is O(N) - and I think it is - then doing N such divisions gives complexity O(N^2). So we have two O(N^2) operations, which gives a resulting complexity of O(N^2).
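That divide-based approach can be sketched as follows (my own code, with made-up names, not from the post): one pass of multiplies to get the total product, then one exact division per cell. The division is exact because each cell's number divides the total product, and the problem statement guarantees positive integers, so there is no division by zero:

```java
import java.math.BigInteger;

// Fill each cell with the product of all numbers except the one in that cell,
// by computing the total product once and dividing it by each element.
public class ProductExceptSelf {

    static BigInteger[] productsExceptSelf(BigInteger[] numbers) {
        BigInteger total = BigInteger.ONE;
        for (BigInteger x : numbers) {            // N multiplies
            total = total.multiply(x);
        }
        BigInteger[] result = new BigInteger[numbers.length];
        for (int i = 0; i < numbers.length; i++) {
            result[i] = total.divide(numbers[i]); // one exact division per cell
        }
        return result;
    }

    public static void main(String[] args) {
        BigInteger[] in = {
            new BigInteger("2"), new BigInteger("3"), new BigInteger("4")
        };
        for (BigInteger r : productsExceptSelf(in)) {
            System.out.println(r); // prints 12, 8, 6
        }
    }
}
```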

Martin Vajsar wrote: I think Sresh and Myke are right - the original post speaks about the product of all N numbers in the array (except one, to be precise). The number of digits in this product is proportional to N. And the complexity of computing a product of N numbers is O(N^2) - a bigger N means more factors to multiply as well as more digits in the intermediate results.

Interesting. So we are not talking about using any of the primitive data types... but about some sort of data type that can handle the exact calculation, such as the BigInteger class.

Martin Vajsar wrote:
Now, the naive approach would be to compute N products of N-1 numbers each (that is, to compute each cell from scratch), which would give a total complexity of O(N^3).

A better approach might be to compute the product of all the numbers once (complexity O(N^2)), and then divide the resulting product by the number in each cell of the array to obtain the result to put in that cell. If the complexity of dividing an N-digit number by a 10-digit number is O(N) - and I think it is - then doing N such divisions gives complexity O(N^2). So we have two O(N^2) operations, which gives a resulting complexity of O(N^2).

I am not convinced that it is that easy. The values of N are not the same during the series of calculations -- meaning that for the calculation of two numbers it may be O(N^2), but for a series of N multiplies it may not be as simple as O(N^3), as not all the calculations are the same size. Or in other words, the calculation is not as complex in the beginning.

Henry Wong wrote: Interesting. So we are not talking about using any of the primitive data types... but about some sort of data type that can handle the exact calculation, such as the BigInteger class.

I've assumed this right from the start. Computing a product of ten-digit numbers grows out of the int (or long) range pretty quickly. But unfortunately it was not stated anywhere.

I am not convinced that it is that easy. The values of N are not the same during the series of calculations -- meaning that for the calculation of two numbers it may be O(N^2), but for a series of N multiplies it may not be as simple as O(N^3), as not all the calculations are the same size. Or in other words, the calculation is not as complex in the beginning.

I'm really not sure. I believe the complexity is proportional to the total number of digits to handle, and I'll try to estimate that (for the product of N ten-digit numbers):

If each factor has ten digits, then the number of digits of the product of N factors is approximately 10*N. If i is the index of the intermediate result, computing each intermediate result will require a multiplication involving 10*i digits. The total count of all digits is then given as the sum of a linear series, which is approximately 10*N*(N-1)/2 (10 obviously being the number of digits). That gives complexity O(N^2) for computing the product of N numbers.
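As a quick sanity check of that estimate (my own sketch, not from the post), one can multiply N copies of a ten-digit number left to right with BigInteger, sum the digit counts of the intermediate results, and compare against 10*N*(N-1)/2:

```java
import java.math.BigInteger;

// Sum the digit counts of all intermediate results when multiplying
// n ten-digit numbers one after another.
public class DigitSumEstimate {

    static long totalInterimDigits(int n) {
        BigInteger factor = new BigInteger("9999999999"); // a 10-digit factor
        BigInteger product = factor;
        long total = 0;
        for (int i = 1; i < n; i++) {
            product = product.multiply(factor);
            total += product.toString().length(); // digits of this intermediate result
        }
        return total;
    }

    public static void main(String[] args) {
        int n = 20;
        System.out.println(totalInterimDigits(n)); // 2090 measured...
        System.out.println(10L * n * (n - 1) / 2); // ...vs 1900 from the 10*N*(N-1)/2 estimate
    }
}
```

Both quantities grow quadratically in N; the estimate differs only by a lower-order term.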

Now, the naive approach requires computing a product of N-1 numbers N times, and that gives complexity O(N^3) for solving the puzzle.

Martin Vajsar wrote:
If each factor has ten digits, then the number of digits of the product of N factors is approximately 10*N. If i is the index of the intermediate result, computing each intermediate result will require a multiplication involving 10*i digits. The total count of all digits is then given as the sum of a linear series, which is approximately 10*N*(N-1)/2 (10 obviously being the number of digits). That gives complexity O(N^2) for computing the product of N numbers.

I am not sure that I agree with this... keep in mind that N represents the number of 10-digit numbers. And each 10-digit number needs to be multiplied by an interim product value (tracking the product so far until it reaches the total product). First, this interim product value starts off at 10 digits. And second, it actually grows faster than N. I can only weakly argue that you can't ignore the growth (meaning that the size of the interim product starts small), but I can strongly argue that you can't treat multiplying any 10-digit number by the interim product as costing 10*N (since the interim product is growing faster than N).

Let's assume that the product of any two numbers of K digits has a complexity of O(K^2). This assumption is based on the algorithm in which all the digit combinations need to be multiplied together -- i.e. no lookup tables or other tricks allowed to get to the product. I am also using K because I need N to represent the number of 10-digit numbers.

Now, in the case of the product of N numbers, let's form the product this way. Cut the list in half and take the interim product of each half. Then take the product of the two halves' interim values. Of course, to do the two halves, we use the same trick recursively. The total number of multiplies that need to be performed is N-1, with about half of them being products of two 10-digit numbers, half of the remainder being products of those results, half of that being products of those results, etc., until only one multiply is needed with the final two largest results.

So, what is the relationship of K to N? My gut at this point says it is logarithmic, but I am not sure, so hopefully someone will correct me. I purposely chose a recursive solution as there seems to be a relationship between K and the size of its result, and keeping the Ks close to equal in size may help. Anyway, given the assumption (which may be wrong), the complexity to take the total product should be O(N * (log(N))^2).
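The recursive scheme described above can be sketched like this (my own code; the names and the half-open range convention are assumptions). It performs exactly N-1 multiplies arranged in a balanced tree, so the two operands of each multiply are roughly equal in size. Whether this actually reaches O(N * log(N)^2) depends on the multiplication algorithm underneath -- with plain schoolbook multiplication, the final multiply of two halves of about 5N digits each already costs O(N^2) digit operations on its own:

```java
import java.math.BigInteger;

// Product of numbers[from..to) computed by splitting the range in half,
// taking each half's product recursively, and multiplying the two results.
public class BalancedProduct {

    static BigInteger product(BigInteger[] numbers, int from, int to) {
        if (to - from == 1) {
            return numbers[from];                     // a single factor: nothing to multiply
        }
        int mid = from + (to - from) / 2;             // split the range in half
        return product(numbers, from, mid)            // product of the left half...
                .multiply(product(numbers, mid, to)); // ...times product of the right half
    }

    public static void main(String[] args) {
        BigInteger[] numbers = {
            BigInteger.valueOf(2), BigInteger.valueOf(3),
            BigInteger.valueOf(5), BigInteger.valueOf(7)
        };
        System.out.println(product(numbers, 0, numbers.length)); // prints 210
    }
}
```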

Henry Wong wrote: I am not sure that I agree with this... keep in mind that N represents the number of 10-digit numbers. And each 10-digit number needs to be multiplied by an interim product value (tracking the product so far until it reaches the total product). First, this interim product value starts off at 10 digits. And second, it actually grows faster than N. I can only weakly argue that you can't ignore the growth (meaning that the size of the interim product starts small), but I can strongly argue that you can't treat multiplying any 10-digit number by the interim product as costing 10*N (since the interim product is growing faster than N).

Obviously the product of N ten-digit numbers grows faster than N. But the number of digits of the product grows (very roughly) proportionally to N, or just a bit slower - depending, actually, on the distribution of the ten-digit numbers. Assuming an even distribution of values, the average factor will be something like 5,499,999,999.

But this assumes taking the numbers and duly multiplying them one after another with the interim result. I hadn't even tried to think of a better way of doing it, and I think your proposal should make it better. At first blush, even the complexity looks right (I mean the log(N) squared - it is certainly more than log(N), since operations further up the tree are more expensive), but I'd have to think more about it to be sure.

Henry Wong wrote: Let's assume that the product of any two numbers of K digits has a complexity of O(K^2).

Yes, that's what the Wikipedia article on the computational complexity of mathematical operations says for the method you learned in elementary school. But there are better algorithms than that; see the article.