Code Journal is a free, biweekly newsletter on programming and computer science provided jointly by Cprogramming.com and AI Horizon. An archive of all past issues is available in both HTML and text formats.
CODE JOURNAL: Your Guide to Programming
March 11, 2002
In This Edition:
- Welcome to the Code Journal
- Handling Errors Exceptionally Well in C++
- Time Flies Like an Arrow, but Time Algorithms with Big-O
- Programming Challenge
Welcome to the Code Journal, a joint
venture between Cprogramming.com
and AI Horizon that aims to
provide insightful articles on both C++ and algorithmic programming.
Code Journal is helpware: in return for reading it, you are asked
to help someone else out with their own programming problems. Good
luck, and quick compiling.
---------------------------------------------------------
C/C++ Programming by Alex Allain
---------------------------------------------------------
Handling Errors Exceptionally Well in C++
One handy feature of C++ is its exception handling
system. An exception is an unexpected circumstance that the section of code containing the problem is not explicitly designed to handle. In C++, exception handling is useful because it makes it easy to separate code written to handle the normal work of the program from code written to handle errors; this separation makes the code much easier to read. Furthermore, exception handling in C++ propagates exceptions up the call stack; therefore, if several functions are called but only one of them needs to deal reliably with errors, that one function can handle the exceptions without any error-handling code in the intermediate functions.
When errors occur, the function generating the error can 'throw' an
exception. For example, take a sample function that does division:
const int DivideByZero = 10;

double divide(double x, double y)
{
    if (y == 0)
        throw DivideByZero;
    return x / y;
}
The function will throw DivideByZero as an exception, which can then be caught by a catch statement that handles exceptions of type int.
The necessary construction for catching exceptions is a try-catch
system. If you wish to have your program check for exceptions, you must enclose the code that may throw exceptions in a try block. For example:
try
{
    divide(10, 0);
}
catch (int i)
{
    if (i == DivideByZero)
        cerr << "Divide by zero error";
}
The catch statement catches exceptions that are
of the proper type. You can, for example, throw
objects of a class to differentiate between several different exceptions.
Also, once a catch statement finishes executing, the program continues running from the code that follows the catch block.
It is often more useful to create a class that stores information about the exception as it occurs. For example:
class DivideByZero
{
public:
    double divisor;
    DivideByZero(double x);
};

DivideByZero::DivideByZero(double x)
    : divisor(x)
{}

int divide(int x, int y)
{
    if (y == 0)
        throw DivideByZero(x);
    return x / y;
}

try
{
    divide(12, 0);
}
catch (DivideByZero divZero)
{
    cerr << "Attempted to divide " << divZero.divisor;
    cerr << " by zero";
}
Of course, it's also possible to have a general exception handler that will respond to any thrown exception. To set one up, simply write catch(...) as the final catch statement.
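For instance, a catch-all handler could be appended to the earlier example like this (a brief sketch reusing the DivideByZero class from above; catch(...) must come last, since it matches any exception):

try
{
    divide(12, 0);
}
catch (DivideByZero divZero)
{
    cerr << "Attempted to divide " << divZero.divisor << " by zero";
}
catch (...)
{
    cerr << "Some other exception occurred";
}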
The handy thing about exception handling is that the errors can be
handled outside of the regular code. This means that it is easier
to structure the program code, and it makes dealing with errors more
centralized. Finally, because the exception is passed back up the
stack of calling functions, you can handle errors at any place you
choose.
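As a brief sketch of that propagation (the intermediate() function here is illustrative, not from the examples above), note that the middle function contains no exception-handling code at all, yet an exception thrown in divide() still reaches the catch in main():

#include <iostream>
using namespace std;

const int DivideByZero = 10;

double divide(double x, double y)
{
    if (y == 0)
        throw DivideByZero;            // the error is detected here
    return x / y;
}

double intermediate(double x, double y)
{
    // No try/catch here; the exception passes straight through this call.
    return divide(x, y) * 2;
}

int main()
{
    try
    {
        intermediate(10, 0);
    }
    catch (int i)                      // caught two calls up the stack
    {
        if (i == DivideByZero)
            cerr << "Divide by zero error";
    }
    return 0;
}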
************************************
Alexander Allain is the webmaster of Cprogramming.com.
Contact him at [email protected]
---------------------------------------------------------
Algorithms and Programming by Eric Suh
---------------------------------------------------------
Time Flies Like an Arrow, but Time Algorithms with Big-O
In Computer Science, there is often a need to measure the speed of
an algorithm. In this way, we can compare the efficiency of different algorithms, and so design better ones.
How, then, does one go about finding the speed of an algorithm? Maybe
an experimental method would work. Implement the algorithms on one
computer, and then run all of them and time them. The one with the
smallest running time would be the most efficient.
This method seems great and pretty simple, but there are several flaws
with it. First, the performance of the machine can vary greatly from
trial to trial, because it is very difficult to create an identical,
clean environment for every run. Secondly, algorithms can be slower
or faster depending on the architecture of the operating system, the
hardware, and even the compiler or interpreter. Some systems can simply
handle certain operations better than others, while hardware and software
often make optimizations to code that can really affect performance.
Third, what size should the testing data set be? Often, the "faster" algorithms perform about the same as, or even worse than, the "slower" algorithms on small data sets. However, if a large data set is used, then testing will take a very long time, and small independent factors can have a much greater effect on the computational speed.
All in all, it is better to get a theoretical, abstract way of defining
the efficiency of an algorithm. Let us define the number n
to be a designation for the "size" of the problem. In sorting, this
might be the number of elements to be sorted. In searching, this would
be the number of items to be searched through. It could be the number of bits in a number, or just about anything else that determines how long the algorithm will take.
The theoretical abstract way of defining efficiency is called Big-O
notation. This notation is written as O(f(n)),
where f(n)
is some function of the variable n
which we defined above.
The notation O(f(n)) means that for large enough values of n, the exact number of theoretical operations the algorithm must perform (be they comparisons, data writes, or something else entirely), designated T(n), is at most f(n) multiplied by some constant. That is, T(n) <= k × f(n), where k is some constant.
The algorithm with T(n)
operations would then be considered O(f(n)).
The Big-O stands for "order of magnitude",
a mathematical concept of how "fast" something changes for very large
values.
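For example (a worked illustration added for concreteness): suppose an algorithm performs T(n) = 3n^2 + 5n operations on an input of size n. Since 3n^2 + 5n <= 4n^2 whenever n >= 5, we can take k = 4, and the algorithm is O(n^2).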
Of course, if you think about it, you could say that all algorithms are O(infinity). This is true, but it doesn't help us analyze an algorithm. So, by convention, f(n) is made as small and as simple as possible, so that the notation quickly tells us as much as possible about the speed of an algorithm.
For instance, some of the most common values for f(n)
are: 1, n, n^2, n^3, n*log2(n), log2(n), e^n, 2^n.
(The caret '^' sign means "to the power of". So, n^2
is "n squared." The log2(n) means "Logarithm
to the Base 2 of n").) For sufficiently large n, algorithms of these orders run from fastest to slowest:
O(1)
O(log2(n))
O(n)
O(n*log2(n))
O(n^2)
O(n^3)
O(2^n)
O(e^n)
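To make two of these orders concrete, here is a small illustrative sketch (not part of the original article) contrasting an O(n) linear search with an O(log2(n)) binary search over a sorted array of ints:

// Linear search is O(n): in the worst case it examines every element.
int linearSearch(const int data[], int size, int target)
{
    for (int i = 0; i < size; i++)
        if (data[i] == target)
            return i;                  // found at index i
    return -1;                         // not found
}

// Binary search is O(log2(n)): each step halves the remaining range.
// It requires the array to be sorted in ascending order.
int binarySearch(const int data[], int size, int target)
{
    int low = 0;
    int high = size - 1;
    while (low <= high)
    {
        int mid = low + (high - low) / 2;
        if (data[mid] == target)
            return mid;
        else if (data[mid] < target)
            low = mid + 1;
        else
            high = mid - 1;
    }
    return -1;
}

Doubling the array size adds just one more step to the binary search, but doubles the worst-case work of the linear search.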
When Computer Scientists began to use this notation and analysis regularly, they encountered some problems for which the best known algorithms were always O(2^n) or worse. These problems all seemed to share certain characteristics. Thus, Computer Scientists began to distinguish between Polynomial Time problems and Non-deterministic Polynomial Time problems (or in short, P and NP problems).
P problems are those problems for which a solution can be found that
is O(n^x) or faster, where x
is some integer. n^x is a polynomial
function, so these problems are called Polynomial Time problems.
In NP problems, a proposed solution to a problem can be checked in polynomial time. That is, if you have a possible solution to an NP problem, checking whether that solution is correct is a P problem. So, all P problems are a subset of NP. Sorting an array is a P problem, and checking to see if an array is sorted is also a P problem: you just see if the elements are in order.
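That check, for instance, is a single O(n) pass (a minimal sketch added here for illustration):

// Checking sortedness is a P problem: one pass comparing neighbors.
bool isSorted(const int data[], int size)
{
    for (int i = 1; i < size; i++)
        if (data[i - 1] > data[i])
            return false;              // found a pair out of order
    return true;
}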
Not all NP problems are necessarily P problems, however. Think about
factoring a number: the simplest algorithm involves checking all of the numbers from 2 onwards and dividing, and even the best algorithms known take more than polynomial time. Checking a proposed solution, however, is easy: you just multiply the factors and check that you get the original number back. No one knows whether
this problem can be done in polynomial time or not; maybe we just
haven't gotten a smart enough programmer. Or maybe it just isn't possible
to do it in polynomial time. The big question of this century and
the next for Computer Science is, "Is NP = P?"
It has not been proven either right or wrong, even though there is
a million dollar prize waiting for anyone who proves it one way or
the other.
Who knows, maybe you will be the one to prove it.
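As a small sketch of that asymmetry (the function names here are illustrative): finding a factor by trial division takes time exponential in the number of bits of n, while verifying a proposed factorization is a single multiplication.

// Trial division: for a b-bit number n, up to about 2^(b/2) candidate
// divisors may need to be tried, which is exponential in b.
long factor(long n)
{
    for (long d = 2; d * d <= n; d++)
        if (n % d == 0)
            return d;                  // found a nontrivial factor
    return n;                          // n itself is prime
}

// Verification is easy: multiply the proposed factors back together.
bool checkFactors(long n, long p, long q)
{
    return p * q == n;
}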
************************************
Eric Suh is the webmaster of AI
Horizon, a site devoted to Artificial Intelligence and Computer
Science programming.
Contact him at [email protected].
---------------------------------------------------------
Code Challenge
---------------------------------------------------------
Every issue, we will pose a programming challenge and ask readers to submit their solutions within two weeks. A few of the best solutions will be published in the next issue, along with a new challenge.
Previous Issue's Challenge
------------------------------------
One famous chess problem is the Knight's Tour, in which a knight, placed on a starting square, must then visit every other square on the chessboard exactly once. The Knight's Tour is a fairly typical chess problem. Your challenge is not only to write a program to calculate a Knight's Tour from any square on the board, but also to allow the user to add up to five pieces to the board; the moving knight must avoid capturing those pieces and must also avoid moving onto any square where it could be captured. As an interesting note, in The Psychology of Chess, the authors describe a test for chess talent that works under similar conditions and is based on speed of response. The test accurately predicted a future grandmaster.
HINT: Use recursion.
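For reference, here is a minimal backtracking sketch of the basic Knight's Tour (without the extra enemy pieces the full challenge asked for). Calling tour(0, 0, 1) fills board[][] with the move number of each square; note that this naive version can be very slow from some starting squares, and heuristics such as Warnsdorff's rule (trying the move with the fewest onward moves first) speed it up dramatically.

const int N = 8;
int board[N][N];                       // move number of each square; 0 = unvisited
const int dx[8] = { 2, 1, -1, -2, -2, -1,  1,  2 };
const int dy[8] = { 1, 2,  2,  1, -1, -2, -2, -1 };

bool tour(int x, int y, int move)
{
    board[x][y] = move;
    if (move == N * N)
        return true;                   // every square has been visited
    for (int i = 0; i < 8; i++)
    {
        int nx = x + dx[i];
        int ny = y + dy[i];
        if (nx >= 0 && nx < N && ny >= 0 && ny < N && board[nx][ny] == 0)
            if (tour(nx, ny, move + 1))
                return true;
    }
    board[x][y] = 0;                   // dead end: backtrack
    return false;
}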
Solutions
------------------------------------
We received only one response to the last issue's challenge. Mike McInnis submitted a standard Knight's Tour program rather than the modified version, but given the dearth of responses, we are publishing his solution:
http://www.cprogramming.com/source/mm-knight.cpp
This Issue's Challenge
------------------------------------
Write the shortest quicksort algorithm you can. It's possible to do
it, minus the variable declarations, in roughly four lines. Winners
will be selected based on length of code.
Send your solutions to [email protected]
as source code files, and you may find it published. Please include
either your name or an identifying username so that we may attribute
the solution to you in the next newsletter. If you wish, you may ask
us to withhold your name.
---------------------------------------------------------
Suggestions and comments on this newsletter should be sent to [email protected]
or [email protected].
Editors:
Eric Suh, [email protected]
Alexander Allain, [email protected]
To unsubscribe from this journal, send a blank email to [email protected].