Contents:
Chapter 0 Prologue
Chapter 1 Algorithms with numbers
Chapter 2 Divide-and-conquer algorithms
Chapter 3 Decompositions of graphs
Chapter 4 Paths in graphs
Chapter 5 Greedy algorithms
Chapter 6 Dynamic programming
Chapter 7 Linear programming and reductions
Chapter 8 NP-complete problems
Chapter 9 Coping with NP-completeness

S. Dasgupta, C.H. Papadimitriou, and U.V. Vazirani
1. Is it correct?
2. How much time does it take, as a function of n?
3. And can we do better?
The first question is moot here, as this algorithm is precisely Fibonacci's definition of Fn.
But the second demands an answer. Let T(n) be the number of computer steps needed to
compute fib1(n); what can we say about this function? For starters, if n is less than 2, the
procedure halts almost immediately, after just a couple of steps. Therefore,

T(n) ≤ 2 for n ≤ 1.
For larger values of n, there are two recursive invocations of fib1, taking time T(n−1) and
T(n−2), respectively, plus three computer steps (checks on the value of n and a final addition).
Therefore,

T(n) = T(n−1) + T(n−2) + 3 for n > 1.

Compare this to the recurrence relation for Fn: we immediately see that T(n) ≥ Fn.
This is very bad news: the running time of the algorithm grows as fast as the Fibonacci
numbers! T(n) is exponential in n, which implies that the algorithm is impractically slow
except for very small values of n.
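The recursive algorithm under discussion (fib1, following the book's naming) can be sketched in Python; the book presents it as pseudocode, so this concrete rendering is ours:

```python
def fib1(n):
    """Naive recursive Fibonacci, mirroring the definition of F_n directly."""
    if n == 0:
        return 0
    if n == 1:
        return 1
    # Two recursive invocations, costing T(n-1) and T(n-2) steps
    return fib1(n - 1) + fib1(n - 2)
```

Even fib1(35) takes noticeable time on a modern machine, consistent with the exponential analysis above.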
Let's be a little more concrete about just how bad exponential time is. To compute F200, the
algorithm executes T(200) ≥ F200 ≥ 2^138 elementary computer steps. How long this
actually takes depends, of course, on the computer used. At this time, the fastest computer
in the world is the NEC Earth Simulator, which clocks 40 trillion steps per second. Even on
this machine, fib1(200) would take at least 2^92 seconds. This means that, if we start the
computation today, it would still be going long after the sun turns into a red giant star.
But technology is rapidly improving: computer speeds have been doubling roughly every
18 months, a phenomenon sometimes called Moore's law. With this extraordinary growth,
perhaps fib1 will run a lot faster on next year's machines. Let's see: the running time of
fib1(n) is proportional to 2^(0.694n) ≈ (1.6)^n, so it takes 1.6 times longer to compute Fn+1 than
Fn. And under Moore's law, computers get roughly 1.6 times faster each year. So if we can
reasonably compute F100 with this year's technology, then next year we will manage F101. And
the year after, F102. And so on: just one more Fibonacci number every year! Such is the curse
of exponential time.
In short, our naive recursive algorithm is correct but hopelessly inefficient. Can we do
better?
A polynomial algorithm
Let's try to understand why fib1 is so slow. Figure 0.1 shows the cascade of recursive invocations
triggered by a single call to fib1(n). Notice that many computations are repeated!
A more sensible scheme would store the intermediate results (the values F0, F1, ..., Fn−1)
as soon as they become known.
Figure 0.1 The proliferation of recursive calls in fib1. [figure omitted]

function fib2(n)
if n = 0: return 0
create an array f[0 . . . n]
f[0] = 0, f[1] = 1
for i = 2 . . . n:
    f[i] = f[i−1] + f[i−2]
return f[n]
As with fib1, the correctness of this algorithm is self-evident because it directly uses the
definition of Fn. How long does it take? The inner loop consists of a single computer step and
is executed n − 1 times. Therefore the number of computer steps used by fib2 is linear in n.
From exponential we are down to polynomial, a huge breakthrough in running time. It is now
perfectly reasonable to compute F200 or even F200,000.
As we will see repeatedly throughout this book, the right algorithm makes all the difference.
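This array-based scheme (fib2 in the book's naming) translates directly into Python; the structure follows the book, the Python syntax is ours:

```python
def fib2(n):
    """Iterative Fibonacci: store the intermediate results F_0, ..., F_n."""
    if n == 0:
        return 0
    f = [0] * (n + 1)   # f[i] will hold F_i
    f[1] = 1
    for i in range(2, n + 1):
        f[i] = f[i - 1] + f[i - 2]
    return f[n]
```

Because Python integers have arbitrary precision, even fib2(200000) completes in practice, as the text promises.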
More careful analysis
In our discussion so far, we have been counting the number of basic computer steps executed
by each algorithm and thinking of these basic steps as taking a constant amount of time.
This is a very useful simplification. After all, a processor's instruction set has a variety of
basic primitives (branching, storing to memory, comparing numbers, simple arithmetic, and
so on) and rather than distinguishing between these elementary operations, it is far more
convenient to lump them together into one category.

(Footnote: To better appreciate the importance of this dichotomy between exponential and polynomial algorithms, the reader may want to peek ahead to the story of Sissa and Moore in Chapter 8.)
But looking back at our treatment of Fibonacci algorithms, we have been too liberal with
what we consider a basic step. It is reasonable to treat addition as a single computer step if
small numbers are being added, 32-bit numbers say. But the nth Fibonacci number is about
0.694n bits long, and this can far exceed 32 as n grows. Arithmetic operations on arbitrarily
large numbers cannot possibly be performed in a single, constant-time step. We need to audit
our earlier running time estimates and make them more honest.
We will see in Chapter 1 that the addition of two n-bit numbers takes time roughly proportional
to n; this is not too hard to understand if you think back to the grade-school procedure
for addition, which works on one digit at a time. Thus fib1, which performs about Fn additions,
actually uses a number of basic steps roughly proportional to nFn. Likewise, the
number of steps taken by fib2 is proportional to n^2, still polynomial in n and therefore
exponentially superior to fib1. This correction to the running time analysis does not diminish
our breakthrough.
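The claim that Fn is about 0.694n bits long is easy to check numerically; the snippet below is a sanity check we added, not code from the book:

```python
def fib(n):
    """Iterative Fibonacci using two running variables."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

n = 10000
ratio = fib(n).bit_length() / n
# F_n is about 0.694n bits long: the ratio approaches log2 of the golden ratio
print(ratio)
```

The printed ratio is close to 0.694, matching the constant in the text.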
But can we do even better than fib2? Indeed we can: see Exercise 0.4.
0.3 Big-O notation
We've just seen how sloppiness in the analysis of running times can lead to an unacceptable
level of inaccuracy in the result. But the opposite danger is also present: it is possible to be
too precise. An insightful analysis is based on the right simplifications.
Expressing running time in terms of basic computer steps is already a simplification. After
all, the time taken by one such step depends crucially on the particular processor and even on
details such as caching strategy (as a result of which the running time can differ subtly from
one execution to the next). Accounting for these architecture-specific minutiae is a nightmarishly
complex task and yields a result that does not generalize from one computer to the next.
It therefore makes more sense to seek an uncluttered, machine-independent characterization
of an algorithm's efficiency. To this end, we will always express running time by counting the
number of basic computer steps, as a function of the size of the input.
And this simplification leads to another. Instead of reporting that an algorithm takes, say,
5n^3 + 4n + 3 steps on an input of size n, it is much simpler to leave out lower-order terms such
as 4n and 3 (which become insignificant as n grows), and even the detail of the coefficient 5
in the leading term (computers will be five times faster in a few years anyway), and just say
that the algorithm takes time O(n^3) (pronounced "big oh of n cubed").
It is time to define this notation precisely. In what follows, think of f(n) and g(n) as the
running times of two algorithms on inputs of size n.

Let f(n) and g(n) be functions from positive integers to positive reals. We say
f = O(g) (which means that "f grows no faster than g") if there is a constant c > 0
such that f(n) ≤ c · g(n).
Saying f = O(g) is a very loose analog of "f ≤ g". Here are some commonsense rules that
help simplify functions by omitting dominated terms:

1. Multiplicative constants can be omitted: 14n^2 becomes n^2.
2. n^a dominates n^b if a > b: for instance, n^2 dominates n.
3. Any exponential dominates any polynomial: 3^n dominates n^5 (it even dominates 2^n).
4. Likewise, any polynomial dominates any logarithm: n dominates (log n)^3. This also
means, for example, that n^2 dominates n log n.
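These dominance rules are easy to observe numerically. As an illustration of rule 4 (ours, not the book's), the ratio of a logarithmic power to a polynomial shrinks toward zero:

```python
import math

# Rule 4: any polynomial dominates any logarithm.
# The ratio (log n)^3 / n shrinks toward zero as n grows.
ratios = [(math.log(n)) ** 3 / n for n in (10**3, 10**6, 10**9)]
print(ratios)  # strictly decreasing
```

The same experiment with n^2 versus n log n shows the ratio n log n / n^2 vanishing as well.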
Don't misunderstand this cavalier attitude toward constants. Programmers and algorithm
developers are very interested in constants and would gladly stay up nights in order to make
an algorithm run faster by a factor of 2. But understanding algorithms at the level of this
book would be impossible without the simplicity afforded by big-O notation.
Exercises

0.1. In each of the following situations, indicate whether f = O(g), or f = Ω(g), or both (in which case
f = Θ(g)).

(a) f(n) = n − 100, g(n) = n − 200
(b) f(n) = n^(1/2), g(n) = n^(2/3)
(c) f(n) = 100n + log n, g(n) = n + (log n)^2
(d) f(n) = n log n, g(n) = 10n log 10n
(e) f(n) = log 2n, g(n) = log 3n
(f) f(n) = 10 log n, g(n) = log(n^2)
(g) f(n) = n^1.01, g(n) = n log^2 n
(h) f(n) = n^2 / log n, g(n) = n (log n)^2
(i) f(n) = n^0.1, g(n) = (log n)^10
(j) f(n) = (log n)^(log n), g(n) = n / log n
(k) f(n) = √n, g(n) = (log n)^3
(l) f(n) = n^(1/2), g(n) = 5^(log_2 n)
(m) f(n) = n 2^n, g(n) = 3^n
(n) f(n) = 2^n, g(n) = 2^(n+1)
(o) f(n) = n!, g(n) = 2^n
(p) f(n) = (log n)^(log n), g(n) = 2^((log_2 n)^2)
(q) f(n) = Σ_{i=1}^{n} i^k, g(n) = n^(k+1)
0.2. Show that, if c is a positive real number, then g(n) = 1 + c + c^2 + ... + c^n is:

(a) Θ(1) if c < 1.
(b) Θ(n) if c = 1.
(c) Θ(c^n) if c > 1.

The moral: in big-O terms, the sum of a geometric series is simply the first term if the series is
strictly decreasing, the last term if the series is strictly increasing, or the number of terms if the
series is unchanging.
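The three regimes of this moral can be checked numerically; this quick illustration is ours, not part of the exercise:

```python
def geometric(n, c):
    """Sum 1 + c + c^2 + ... + c^n."""
    return sum(c ** i for i in range(n + 1))

print(geometric(100, 0.5))           # bounded by a constant, close to 2
print(geometric(100, 1))             # exactly n + 1 = 101
print(geometric(100, 2) / 2 ** 100)  # within a constant factor of the last term
```

In each regime the dominant contribution (first term, term count, or last term) determines the Θ bound.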
0.3. The Fibonacci numbers F0, F1, F2, ... are defined by the rule

F0 = 0,  F1 = 1,  Fn = Fn−1 + Fn−2.

In this problem we will confirm that this sequence grows exponentially fast and obtain some
bounds on its growth.

(a) Use induction to prove that Fn ≥ 2^(0.5n) for n ≥ 6.
(b) Find a constant c < 1 such that Fn ≤ 2^(cn) for all n ≥ 0. Show that your answer is correct.
(c) What is the largest c you can find for which Fn = Ω(2^(cn))?
0.4. Is there a faster way to compute the nth Fibonacci number than by fib2 (page 13)? One idea
involves matrices.
We start by writing the equations F1 = F1 and F2 = F0 + F1 in matrix notation:

( F1 )   ( 0  1 )   ( F0 )
( F2 ) = ( 1  1 ) · ( F1 )
Similarly,

( F2 )   ( 0  1 )^2   ( F0 )
( F3 ) = ( 1  1 )   · ( F1 )

and in general

( Fn   )   ( 0  1 )^n   ( F0 )
( Fn+1 ) = ( 1  1 )   · ( F1 )
So, in order to compute Fn, it suffices to raise this 2 x 2 matrix, call it X, to the nth power.
(a) Show that two 2 x 2 matrices can be multiplied using 4 additions and 8 multiplications.
But how many matrix multiplications does it take to compute X^n?

(b) Show that O(log n) matrix multiplications suffice for computing X^n. (Hint: Think about
computing X^8.)

Thus the number of arithmetic operations needed by our matrix-based algorithm, call it fib3, is
just O(log n), as compared to O(n) for fib2. Have we broken another exponential barrier?
The catch is that our new algorithm involves multiplication, not just addition; and multiplications
of large numbers are slower than additions. We have already seen that, when the complexity
of arithmetic operations is taken into account, the running time of fib2 becomes O(n^2).
(c) Show that all intermediate results of fib3 are O(n) bits long.
(d) Let M(n) be the running time of an algorithm for multiplying n-bit numbers, and assume
that M(n) = O(n^2) (the school method for multiplication, recalled in Chapter 1, achieves
this). Prove that the running time of fib3 is O(M(n) log n).
(e) Can you prove that the running time of fib3 is O(M(n))? (Hint: The lengths of the numbers
being multiplied get doubled with every squaring.)
In conclusion, whether fib3 is faster than fib2 depends on whether we can multiply n-bit
integers faster than O(n^2). Do you think this is possible? (The answer is in Chapter 2.)
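One plausible sketch of the matrix method (fib3) using repeated squaring; this is our implementation of the idea in the exercise, not code from the book:

```python
def mat_mult(A, B):
    """Multiply two 2x2 matrices: 8 multiplications and 4 additions."""
    return [
        [A[0][0] * B[0][0] + A[0][1] * B[1][0], A[0][0] * B[0][1] + A[0][1] * B[1][1]],
        [A[1][0] * B[0][0] + A[1][1] * B[1][0], A[1][0] * B[0][1] + A[1][1] * B[1][1]],
    ]

def fib3(n):
    """Compute F_n with O(log n) matrix multiplications (repeated squaring)."""
    result = [[1, 0], [0, 1]]   # 2x2 identity
    X = [[0, 1], [1, 1]]
    while n > 0:
        if n & 1:
            result = mat_mult(result, X)
        X = mat_mult(X, X)      # each squaring doubles the exponent
        n >>= 1
    # result now equals X raised to the original n; its top-right entry is F_n
    return result[0][1]
```

Each iteration halves n, so the loop performs O(log n) matrix multiplications, exactly as part (b) requires.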
Finally, there is a formula for the Fibonacci numbers:

Fn = (1/√5) · [ ((1 + √5)/2)^n − ((1 − √5)/2)^n ].

So, it would appear that we only need to raise a couple of numbers to the nth power in order to
compute Fn. The problem is that these numbers are irrational, and computing them to sufficient
accuracy is nontrivial. In fact, our matrix method fib3 can be seen as a roundabout way of
raising these irrational numbers to the nth power. If you know your linear algebra, you should
see why. (Hint: What are the eigenvalues of the matrix X?)
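The accuracy problem is easy to observe: evaluating this closed-form formula in 64-bit floating point works only for small n. The comparison below is an illustration we added:

```python
from math import sqrt

def fib_closed(n):
    """The closed-form Fibonacci formula evaluated with 64-bit floats."""
    phi = (1 + sqrt(5)) / 2
    psi = (1 - sqrt(5)) / 2
    return round((phi ** n - psi ** n) / sqrt(5))

def fib_exact(n):
    """Exact iterative Fibonacci for comparison."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Agrees for small n, but double-precision error eventually creeps in
print(fib_closed(30) == fib_exact(30))  # True
```

For large enough n the rounding error in phi ** n exceeds 1/2 and the two functions disagree, which is why the matrix method's exact integer arithmetic matters.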
Chapter 1
Algorithms with numbers
One of the main themes of this chapter is the dramatic contrast between two ancient problems
that at first seem very similar:

Factoring: Given a number N, express it as a product of its prime factors.

Primality: Given a number N, determine whether it is a prime.
Factoring is hard. Despite centuries of effort by some of the world's smartest mathematicians
and computer scientists, the fastest methods for factoring a number N take time exponential
in the number of bits of N.
On the other hand, we shall soon see that we can efficiently test whether N is prime!
And (it gets even more interesting) this strange disparity between the two intimately related
problems, one very hard and the other very easy, lies at the heart of the technology that
enables secure communication in today's global information environment.
En route to these insights, we need to develop algorithms for a variety of computational
tasks involving numbers. We begin with basic arithmetic, an especially appropriate starting
point because, as we know, the word algorithms originally applied only to methods for these
problems.
1.1 Basic arithmetic
1.1.1 Addition
We were so young when we learned the standard technique for addition that we would scarcely
have thought to ask why it works. But let's go back now and take a closer look.
It is a basic property of decimal numbers that
The sum of any three single-digit numbers is at most two digits long.
Quick check: the sum is at most 9 + 9 + 9 = 27, two digits long. In fact, this rule holds not just
in decimal but in any base b ≥ 2 (Exercise 1.1). In binary, for instance, the maximum possible
sum of three single-bit numbers is 3, which is a 2-bit number.
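The claim can be verified exhaustively for small bases; this quick check is ours (Exercise 1.1 asks for the general proof):

```python
def rule_holds(b):
    """Max sum of three single digits in base b fits in two base-b digits."""
    max_sum = 3 * (b - 1)            # three copies of the largest single digit
    largest_two_digit = b * b - 1    # e.g. 99 in decimal, 11 in binary (= 3)
    return max_sum <= largest_two_digit

print(all(rule_holds(b) for b in range(2, 1000)))  # True
```

Algebraically, 3(b − 1) ≤ b^2 − 1 reduces to (b − 1)(b − 2) ≥ 0, which holds for every b ≥ 2.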