*All book pages refer to: Koffman and Wolfgang, Objects, Abstraction, Data Structures, and Design using Java, ver 5.0.*

Week | Tues | Thurs |
---|---|---|
Week 1 | Sept 2 | Sept 4 |
Week 2 | Sept 9 | Sept 11 |
Week 3 | Sept 16 | Sept 18 |
Week 4 | Sept 23 | Sept 25 |
Week 5 | Sept 30 | Oct 2 |
Week 6 | Oct 7 | Oct 9 |
Week 7 | Oct 14 | Oct 16 |
Week 8 | Oct 21 | Oct 23 |
Week 9 | Oct 28 | Oct 30 |
Week 10 | Nov 4 | Nov 6 |
Week 11 | Nov 11 | Nov 13 |
Week 12 | Nov 18 | Nov 20 |
Week 13 | Nov 25 | Nov 27 |
Week 14 | Dec 2 | Dec 4 |
Week 15 | Dec 9 | |

***

Introduction, overview of course, grading scheme. Slide set #1: Introduction.

So why Java?—"Write once, run anywhere" (Sun's slogan). We talked about the JVM, the portability of Java programs, and the fact that (at the time) no other mainstream language could run content directly from the web (a JVM was included with browsers).

Next we examined the header of Java's "main" method: "public static void main(String args[])". What does "static" mean? Are static methods and fields (variables) accessible with or without instantiating a class? How do you create an object, what does a constructor look like, and what happens to static variables (or "class variables") each time a new object is created? What about non-static variables (or "instance/object variables")? The lab assignment relates to these concepts, and you should be able to write a program that will print out the total number of instances created thus far *each time you instantiate a new object*.
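The lab concept above can be sketched in a few lines (the class and variable names here are hypothetical, not from the lab handout):

```java
// Hypothetical sketch for the lab exercise: a static ("class") variable is
// shared by every instance, while an instance variable belongs to one object.
public class Counter {
    private static int totalInstances = 0; // one copy, shared by the whole class
    private int myNumber;                  // one copy per object

    public Counter() {
        totalInstances++;          // every constructor call bumps the shared count
        myNumber = totalInstances; // each object remembers its own number
        System.out.println("Instances created so far: " + totalInstances);
    }

    public static int getTotalInstances() { // static: callable without any object
        return totalInstances;
    }

    public static void main(String[] args) {
        new Counter(); // prints "Instances created so far: 1"
        new Counter(); // prints "Instances created so far: 2"
        new Counter(); // prints "Instances created so far: 3"
    }
}
```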

Also, we discussed hierarchies, inheritance, downcasting, and upcasting. A key point that we tried to grasp was "a Child is a Parent" (pg 128). This means that when a child/subclass is created ("public class Child extends Parent"), it inherits the same fields and methods of the parent/superclass. Therefore, the child will contain and be able to do everything that the parent has/can do, plus more, since the child will most likely *override* the parent's methods and fields (or add new ones). If we construct an object of type Child but we use its parent's constructor, "Parent pc = new Child()", what happens in memory? We should know that since the Child() constructor is being used, memory is allocated for all of the Child's fields and methods. HOWEVER, since we created the object as type Parent, only the fields and methods of Parent are allowed to be used.
This is known as "upcasting", for we are going up in the hierarchy as we cast from a more specific type to a more general type. The fields and methods of Child are no longer "visible" (they can't be invoked/called unless Parent has them too), but they do exist in memory. But what if we decide we want to cast our object "Parent pc" back to the subclass, Child? "Downcasting" is the opposite of upcasting, and it requires an explicit cast: "Child jkl = (Child) pc". It is possible in this situation ONLY because the object contains hidden fields in memory that can be uncovered when we cast it "down" the hierarchy to type Child. If "pc" was *not* originally created with the Child *constructor*, we would *not* have the option to downcast it to type Child, since the fields required for a Child would never have existed in memory (the attempt would fail at runtime with a ClassCastException).
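A minimal sketch of the up/downcasting discussion (the Parent/Child names follow the lecture; the methods are made up for illustration):

```java
// Hypothetical Parent/Child pair to illustrate up- and downcasting.
class Parent {
    public String greet() { return "parent"; }
}

class Child extends Parent {
    @Override
    public String greet() { return "child"; }  // overrides Parent's method
    public String extra() { return "only a Child can do this"; }
}

public class CastDemo {
    public static void main(String[] args) {
        Parent pc = new Child();   // upcast: Child memory, Parent "view"
        // pc.extra();             // compile error: Parent has no extra()
        System.out.println(pc.greet()); // prints "child": the Child override runs

        Child jkl = (Child) pc;    // downcast needs an explicit cast
        System.out.println(jkl.extra()); // the hidden method is reachable again

        Parent onlyParent = new Parent();
        // Child bad = (Child) onlyParent; // compiles, but throws
        //                                 // ClassCastException at runtime
    }
}
```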

The German word of the week: Erfrischungskaltgetränk (soda).

No slides this time.

First we talked about the Greek word of the week, "Polymorphism". This means that a method will behave differently depending on which subclass it belongs to, because each subclass can redefine/implement the superclass's methods differently. This is known as "overriding" a superclass's method. If we create a class "Drawable" with an empty method called "draw()", and then create subclasses "Rectangle" and "Circle", we would want to write a different draw() method for each subclass to draw both a rectangle and a circle, thus overriding the original superclass method.

To illustrate this point, we can define an array called "shapeArray[ ]" of type Drawable and fill it with objects that are *subclasses* of Drawable (Rectangle, Circle). Since we know the rule "a Child is a Parent", we know that Rectangle and Circle "are" Drawables, and can therefore fit into an array of type Drawable. This relates to polymorphism because you can write this line of code: "shapeArray[#].draw()", and a different result will occur at runtime depending on which object type (Rectangle or Circle) is in that cell # of the array. Any element in the shapeArray can invoke the "draw" method since class Drawable contains a "draw()" method (probably an "abstract" method—see the last paragraph...), but we will see different outcomes because draw() has been overridden by the subclasses Rectangle and Circle.

The major point to remember is that the JVM decides which class's "draw()" method to call based on the class type used in *constructing* the object—not the class type of the variable that *references* the object. That means we can instantiate a rectangle with either "Drawable rect1 = new Rectangle()", or with "Rectangle rect2 = new Rectangle()", but these instances will both contain the structure of a Rectangle in memory due to the *constructor* used to make it. The *reference variable type* ("Drawable" for rect1 and "Rectangle" for rect2) is used to help categorize the objects depending on how we will use them, for we saw that it could be advantageous to have the shapeArray[ ] (type Drawable) refer to objects of different subclass types to perform polymorphic actions. (For more clarification, see "Polymorphism", pg 136).

Abstract methods are those that have not been implemented (contain no body, only the header). In the previous example, the method "draw()" would most likely be an abstract method, since only subclasses of Drawable represent shapes you can actually draw. To signify an abstract method, add the word "abstract" to the method header (before the return type). If a class contains an abstract method, it MUST be declared as an *abstract class*. Can abstract classes be instantiated, and why or why not? A subclass of an abstract class ("extends" the abstract class) will usually provide an implementation (method body) for the abstract method. In our example, Rectangle and Circle would both implement draw() differently so that a rectangle or a circle would be drawn on the screen. For more about abstract methods, see Java Docs - Abstract Methods and Classes, or pg 138 in book.
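Putting the last few paragraphs together, here is a hedged sketch of the Drawable example (draw() returns a String here just so its output is easy to check; in class it would presumably paint to the screen):

```java
// Sketch of the class example: an abstract Drawable with subclasses that
// override draw(). Names follow the lecture; the details are assumptions.
abstract class Drawable {
    public abstract String draw();  // no body: subclasses MUST implement this
}

class Rectangle extends Drawable {
    @Override
    public String draw() { return "drawing a rectangle"; }
}

class Circle extends Drawable {
    @Override
    public String draw() { return "drawing a circle"; }
}

public class ShapeDemo {
    public static void main(String[] args) {
        // new Drawable();  // compile error: abstract classes can't be instantiated
        Drawable[] shapeArray = { new Rectangle(), new Circle() }; // "a Child is a Parent"
        for (Drawable shape : shapeArray) {
            // Polymorphism: which draw() runs depends on the CONSTRUCTED type
            System.out.println(shape.draw());
        }
    }
}
```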

***

To finish up the lecture from last week, we discussed interfaces and their role in event-driven programming. Interfaces act as a kind of template or "contract" that specifies a set of related methods that a class can implement. Interfaces are written as "public interface MyInterface {..." followed by method headers (called signatures—no curly braces, just a semicolon) and possibly constants ("final" in field declaration), but not variables. Because there are no method implementations, interfaces can NEVER be instantiated. Abstract classes also can never be instantiated, but they differ from interfaces because they may have some methods implemented/defined and can contain variables. Remember that only SINGLE inheritance is allowed with classes, but interfaces are not *inherited* (no "extends"). Instead, ONE OR MORE interfaces can be *implemented*, as in "public class MyClass *implements* MyInterface". This declaration binds MyClass in a "contract" that requires it to implement ALL of the methods from the interface. (For more on the comparisons between interfaces and abstract classes, see Table 3.1 on pg 142)

So why do we need interfaces? We will see their function after introducing the Event Model.

Slide set #3: Events shows graphical representations of traditional sequential programming vs. event-driven programming (Java/JavaScript, Flash, etc.). With sequential programming, the program must somehow prompt for input and wait until it is received and processed before any more input can be entered. In event-driven programming, events (input) can occur at an undetermined time, and the results from these events are executed almost simultaneously while still allowing for more events to occur and be processed. Think about how the graphical environment on your computer relates to this model.

A very notable feature of event-driven programming is that the control of the program's execution is given to the USER. Since events can be processed at almost any time, the user can decide when and what he wants to do next, giving him a much more prominent role in the entire program experience. This is why we call visually-based programs Graphical User Interfaces (GUI—pronounced "GOO-ee"). Although building GUIs is more work for the programmer, the user will have an easier time using the product. Usability still remains a key point to focus on when you are designing a GUI, for I'm sure you've all experienced GUIs that aren't very usable.

So how do we deal with events in Java? Let's dissect what happens when a button on a Java GUI is pressed. We'll pretend the button causes the frame (window) background to change color.

EVENT = Button Pressed. EVENT SOURCE = The Button. EVENT LISTENER = The Frame. Due to the nature of the Button object, clicking on it automatically triggers an "event" to be created and passed to any objects/classes that are "registered" listeners of the button. We accomplish this "registration" with the following method call: "button.addActionListener(theFrame)". The frame (theFrame) will now automatically react with the appropriate actions when the button is clicked. Therefore, theFrame must implement the ActionListener interface.

Syntax: "public class theFrame implements ActionListener {..."

ActionListener is an interface that defines but does not implement the method "actionPerformed()", so when we implement the ActionListener interface we are entering a contract requiring that actionPerformed() be implemented. In our example here, the implementation of actionPerformed() will contain code to change the color of the frame's background.
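A minimal sketch of this contract (the class and button names are assumptions):

```java
import java.awt.Color;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JButton;
import javax.swing.JFrame;

// Sketch of the frame/button example from the lecture (names are assumptions).
public class TheFrame extends JFrame implements ActionListener {
    public TheFrame() {
        JButton button = new JButton("Change color");
        button.addActionListener(this); // the SOURCE registers the LISTENER
        getContentPane().add(button);
    }

    @Override
    public void actionPerformed(ActionEvent e) {
        // fulfilling the ActionListener "contract": react to the click
        getContentPane().setBackground(Color.BLUE);
    }
}
```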

German word of the week: Kaffeesahnenäpfchendeckelsammeln (collecting coffee cream pot lids)

We first reviewed the main rule when registering event listeners: *The source registers the listener*.

Next we programmed an event-driven Java program where a button would cause text to print out in the console. If you want to create any graphical objects in Java, you usually start with a JFrame. The JFrame is displayed as a regular window on your desktop. To create a window with a button in it, we will first create a class called MyProject that "extends" JFrame. Remember that "extends" creates a Child-Parent relationship, and therefore our subclass MyProject "is a" JFrame. This means that MyProject can invoke (call) any method that JFrame is allowed to use, such as the method call "[this.]setDefaultCloseOperation(EXIT_ON_CLOSE);". Here we are telling the JFrame that when the user clicks on the window's close button, the application should exit. If you want to learn about other methods/fields that the class JFrame—or any other class—contains, go to the Java API documentation (bookmark it!). This website is one of the best resources for finding methods and checking syntax, but the book generally shows more examples of code and uses language that is easier to understand, in my opinion. Appendix C (pg 761 in the book) deals with events and GUI objects, and JFrame can be found on pg 775.

After we convert our class MyProject into a JFrame, we can attach/add graphical *components* (such as a JButton) to the frame to be displayed. But how will we create this button? Do we need to make another class? The answer is that you will usually write one class that extends JFrame, and then just create/instantiate objects (more specifically: *components*) within this class's constructor method. So for this example, we only need to write one class, MyProject, and we can just put our main() in this class. So right now MyProject contains a constructor, "public MyProject() {..." and a main method, "public static void main(String[] args)". The only point of the main method will be to instantiate the MyProject class, therefore calling the constructor and creating our JFrame along with whatever we specify in the constructor. We accomplish this instantiation in main() by writing: "new MyProject()". Note that in this constructor call we *aren't storing* the created instance of MyProject in a variable (as in "MyProject mp = new MyProject()"). This is because we only want to run the class's constructor, and we don't need to do anything else with it in main. If you create a variable such as "MyProject mp" in main, the compiler expects that you want to use this variable, and you would see a warning in NetBeans (warnings don't necessarily affect the program). So our main is essentially saying, "run the constructor method, since everything we want to happen stems from there".

So we now know that our *constructor* is the method we will be focusing on when we create the JFrame and everything that goes in it. The button will be instantiated in the constructor as "JButton clickButton = new JButton("Click me!")". On Tuesday we went over the syntax for "registering" an event listener with the button (our JFrame is the listener), accomplished with the addActionListener() method. We also know that MyProject needs to implement ActionListener, and therefore must contain the method "public void actionPerformed(ActionEvent e)". Anything that we want to happen when we click on the button (such as a printout in this example) will be written in actionPerformed().
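Putting the pieces together, here is a hedged sketch of what MyProject might look like (the exact strings and layout are assumptions, not the class code):

```java
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JButton;
import javax.swing.JFrame;

// A minimal sketch of the in-class program: one class that is both the
// JFrame and the ActionListener.
public class MyProject extends JFrame implements ActionListener {
    public MyProject() {
        setDefaultCloseOperation(EXIT_ON_CLOSE);
        JButton clickButton = new JButton("Click me!");
        clickButton.addActionListener(this); // register this frame as the listener
        getContentPane().add(clickButton);
        pack();
        setVisible(true);
    }

    @Override
    public void actionPerformed(ActionEvent e) {
        System.out.println("Button was clicked!"); // runs every time the event fires
    }

    public static void main(String[] args) {
        new MyProject(); // just run the constructor; no variable needed
    }
}
```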

The steps to building a GUI, along with a code sample for reference, can be found on slides 5 & 6 of Slide set #4: GUI.

***

Today we continued with Slide set #4: GUI, and reviewed the steps to building a GUI once more. These steps are:

1) Import packages 2) Set up a top-level container 3) Apply layout 4) Add components 5) Register listeners 6) Show it to the world!

Details (see the slides for more detail and samples of GUI elements):

1) Import packages: Remember that Java doesn't recursively import packages (it will only import classes from the specified package, not its subpackages). Ex: "import javax.swing.*;" will not import the classes in the subpackage "javax.swing.event", so it is necessary to declare both imports.

2) Top-level container: Options are JFrame, applet, or dialog. We will almost always use a JFrame in this class, and usually our main class (not necessarily *named* "Main") will extend JFrame.

3) Apply layout: We must call the method "this.getContentPane()" (assuming "this" class extends JFrame) to access the content pane (layer) of the JFrame that is used for holding and arranging the GUI elements. To set a layout, you call the setLayout() method, which belongs to the content pane, so you will write: "this.getContentPane().setLayout(new BorderLayout())". Note that whichever layout type you choose, you must pass an object of that type ("new" or previously declared in a variable) to the setLayout() method.

4) Add components: There are both passive and active components in a Java GUI. An example of a passive component would be something such as text or an image. An active component is one that allows for *user interaction*, such as a JButton or a JSlider. These active components are "basic controls", and to control anything in our GUI they therefore must be EVENT SOURCES. Every active component will own a method called "add_____Listener()", otherwise it wouldn't be able to register listeners. You can check the Java API to see the name of the specific listener method for each component. An example would be the JSlider component, which has a method addChangeListener().

5) Register listeners: We learned about registering listeners last week (week 2). Today we demonstrated a JSlider that would generate a ChangeEvent (the API will tell you what kind of event a component generates). In order for our JFrame class to be an event listener and receive this ChangeEvent (so that we can change the JFrame's background!), we must implement an interface that has the appropriate method to receive this event. We also must find out what method to call to register the JFrame. The answers are in JSlider's API entry: there we can see the method addChangeListener(ChangeListener), which needs a ChangeListener to register as the listener. Clicking on "ChangeListener" brings us to the ChangeListener interface page, which has only one method: stateChanged(). Below is a summary of handling a ChangeEvent from JSlider as opposed to an ActionEvent from JButton.

JButton generates an ActionEvent and calls addActionListener(JFrame), so JFrame must implement ActionListener with the method actionPerformed(ActionEvent e).

JSlider generates a ChangeEvent and calls addChangeListener(JFrame), so JFrame must implement ChangeListener with the method stateChanged(ChangeEvent e).

Check out this JSlider example code from class, which demonstrates how to process the ChangeEvent in the stateChanged() method.
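Since the linked class file isn't reproduced on this page, here is a hedged reconstruction of what such a JSlider demo might look like (the class name and the black-to-white background behavior are assumptions):

```java
import java.awt.Color;
import javax.swing.JFrame;
import javax.swing.JSlider;
import javax.swing.event.ChangeEvent;
import javax.swing.event.ChangeListener;

// Hedged reconstruction of the in-class JSlider demo: sliding changes the
// frame background from black (0) to white (255).
public class SliderFrame extends JFrame implements ChangeListener {
    public SliderFrame() {
        setDefaultCloseOperation(EXIT_ON_CLOSE);
        JSlider slider = new JSlider(0, 255, 130); // min, max, starting value
        slider.addChangeListener(this);            // the source registers the listener
        getContentPane().add(slider);
        pack();
        setVisible(true);
    }

    @Override
    public void stateChanged(ChangeEvent e) {
        JSlider source = (JSlider) e.getSource();  // downcast to reach getValue()
        int v = source.getValue();
        getContentPane().setBackground(new Color(v, v, v)); // equal R,G,B = gray
    }
}
```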

6) Show it to the world: You need to call two methods, pack() and setVisible(true), in order to display your components!

Also, we talked briefly about RGB color values; this online RGB Java applet gives a good demonstration and can also be used when you need to create your own color values.

Some basics: RGB Color = (#Red,#Green,#Blue); White = (255,255,255); Black = (0,0,0); Gray = (130,130,130); Red = (255,0,0); Dark Red = (130,0,0); Light Red/Pink = (255,130,130);

We will learn about creating color objects and using them in custom painting on Thursday.

Today we looked at an updated version of the JSlider example program from Tuesday. This time, instead of extending JFrame for our primary class, we created a new JFrame object within the constructor, and we made our class extend JPanel instead (we'll see why in the next paragraph). But we still made this class implement the interface ChangeListener with "implements ChangeListener". This is because we want to write/implement the essential stateChanged() method here in the primary class so we can define it for our own purposes.

So how do we add custom graphics to a JFrame? First we need to add something called a JPanel (usually we'll have a subclass of JPanel that we modify as our primary class) to the content pane: "[JFrame].getContentPane().add([JPanel/this])". JPanels are special because when you add a JPanel to a JFrame, the "window manager" will automatically call the JPanel's paintComponent() method whenever your window is displayed/resized on your screen. So in a sense, paintComponent() acts as a "refresh screen" command for everything within that JPanel. The paintComponent() method can be modified/overridden so that we can explicitly call the paint() method for all of the objects that we want to paint (as seen in Lab 3's GraphicsEngine class).

The paintComponent() method is written as paintComponent(Graphics g) because the system passes in a Graphics object that lets you invoke any of the graphical drawing methods. You can then pass "g" to any other class, object, or method that wants to draw on the JPanel. As we see in the code for GraphicsEngine, our paintComponent() method passes the Graphics object "g" to the paint() method of each object that we created and placed into the GraphicObject array, "objectList" (done in an external class called GraphicControl, explained in Lab 3). Why are we invoking the paint() method in this case? We can see that paint() is defined in the GraphicObject class, but it's an empty method. The reasoning is that all of our custom shape classes (circle, rectangle, line) will be created as SUBCLASSES of GraphicObject, so they will inherit its fields and methods, allowing you to OVERRIDE "paint(Graphics g)" in a unique way for each different shape class. Therefore, our array of GraphicObjects will contain *different subclass* objects, causing "objectList[#].paint()" to be a polymorphic call. The output will depend on what kind of shape was instantiated and stored in each array cell. You should look at the API to find out what drawing methods you can invoke with this Graphics g object.
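A condensed sketch of this design (class names follow the lab where possible; CircleShape, LineShape, and the specific drawing calls are assumptions):

```java
import java.awt.Graphics;
import javax.swing.JPanel;

// Sketch of the Lab 3 idea: an array of GraphicObject subclasses, painted
// polymorphically from the JPanel's paintComponent().
class GraphicObject {
    public void paint(Graphics g) { } // empty; subclasses override this
}

class CircleShape extends GraphicObject {
    @Override
    public void paint(Graphics g) {
        g.drawOval(10, 10, 50, 50); // x, y, width, height
    }
}

class LineShape extends GraphicObject {
    @Override
    public void paint(Graphics g) {
        g.drawLine(0, 0, 100, 100); // from (0,0) to (100,100)
    }
}

public class GraphicsEngine extends JPanel {
    private GraphicObject[] objectList = { new CircleShape(), new LineShape() };

    @Override
    public void paintComponent(Graphics g) {
        super.paintComponent(g);  // let JPanel clear the background first
        for (GraphicObject obj : objectList) {
            obj.paint(g);         // polymorphic: each subclass draws itself
        }
    }
}
```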

So we now know that JPanel's paintComponent() method gets called automatically upon display or resize of your window. But what if you are using *events* and want the JPanel to refresh every time an event is fired? We know that our actionPerformed(ActionEvent e), or similar method, gets called *every time an event is fired*, so we must trigger paintComponent() from *within* our ActionListener-type method, BUT we won't actually write "paintComponent()". Instead, we must use the call "repaint()" to indirectly access paintComponent(), which is designed to be called only by the "system"/window manager. The JFrame stores a list of all the components added to it with the "getContentPane().add()" method, and these are all ordered to "redraw themselves" when repaint() is called. In Lab 3, we are only adding one component (our JPanel, GraphicsEngine) to a new JFrame (which is created in GraphicsEngine; check the code if you're confused), so it is the only thing being refreshed (via a call to paintComponent()). And within this JPanel's paintComponent() we control the display of all of our objects.

**In lab (on Friday 9/19)** we created these two classes, GraphicControl and Circle, and here they are for your reference (HTML-formatted by NetBeans).

The second topic covered in class today was an introduction to performance evaluation of algorithms. We want to be able to predict how "expensive" or time-costly an algorithm is when the number of input elements increases. If it takes 5 seconds to loop through 5 data elements, how long will it take to loop through 25 elements? It all depends on what occurs during the loop, and maybe there are nested loops, etc. The relationship that exists between increasing the number of elements (n) and how long it takes to run is known as the "growth rate", and we can picture a graph to describe the relationship between the time needed and the size of the data set. If the algorithm takes 5 seconds to process 5 elements, 25 seconds to process 25 elements, and so on, we would be able to draw a straight line on our graph showing that the time needed is a linear function of the number of elements being processed. It is common to use the notation O(n) (pronounced "oh of n", called big-Oh notation) to describe an algorithm that has a linear growth rate. If an algorithm took 25 seconds to process 5 numbers and 100 seconds to process 10 numbers, that would be a quadratic growth rate, described by O(n²). More on this notation and the importance of measuring growth rates can be found on Fred Schwartz's Big-Oh page. He has written a LOT of good notes relating to Java and programming, and here's his Java Notes index page.
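A tiny experiment makes these growth rates concrete; here we count loop iterations instead of seconds (the method names are made up for illustration):

```java
// Illustration of growth rates: count the "steps" a single loop and a
// nested loop take as n grows (operation counts, not seconds).
public class GrowthRates {
    public static long linearSteps(int n) {      // O(n)
        long steps = 0;
        for (int i = 0; i < n; i++) steps++;
        return steps;
    }

    public static long quadraticSteps(int n) {   // O(n^2): a loop inside a loop
        long steps = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) steps++;
        return steps;
    }

    public static void main(String[] args) {
        // Multiplying n by 5 multiplies the linear count by 5,
        // but the quadratic count by 25.
        System.out.println(linearSteps(5) + " vs " + linearSteps(25));       // 5 vs 25
        System.out.println(quadraticSteps(5) + " vs " + quadraticSteps(25)); // 25 vs 625
    }
}
```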

***

No class today.

Today we covered the linked list structure, implemented one in Java, and also introduced the concept of recursion.

A linked list is a *fundamental data structure* (course title, anyone?) in programming, and it is used as the building block for many other structures, including stacks and queues. You can think of a linked list as a chain made up of "nodes" that hold data and also have links or pointers to connect them. A linked list can have different forms: it can be singly-linked (each node points forward to the next node), doubly-linked (each node points forward *and* backward), or circularly-linked (can be singly or doubly-linked, but the last and first nodes are connected to form a circle).

From outside a singly-linked list, we only have a pointer to the first/head node, for this follows the structural design that each node is connected in the same way (one pointer *to* and *from* each node). A doubly-linked list would have a pointer to the first AND last nodes so that you can travel forward (starting at the head) or backward (starting at the tail) depending on the task, since every node points to the previous and next nodes. From within each node, you can only see the data in that node and the node(s) that it points to.

So when traversing a singly-linked list, we have to start at the head (beginning) of the list and make our way through the nodes one-by-one until we reach the node with the desired data. To add a node to the end of a singly-linked list, you travel one-by-one through the list until you reach the node that "points nowhere", signifying that there are no more nodes, and you make that node point to your new node, effectively adding it to the list. Adding a new node to the end of a doubly-linked list is much quicker because you have a pointer from outside of the list to the tail node, so you don't have to go through the whole thing to find the end. However, doubly-linked lists take up more memory space because there are two pointers for each node instead of one.

To implement the linked list structure in Java, we wrote the two classes listed below. Note that our method LinkedList.add() DOES NOT deal with adding the FIRST node to the list. We did that in our main(). Looking at the code below, can you tell why you would get an error if you tried to use add() to add the first element to the list? The assignment for lab 3 describes how to modify the add() method to fulfill this purpose, but it is important that you understand why our original add() method would fail.
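The two class files aren't reproduced on this page, so here is a hedged reconstruction that matches the description above (note how add() assumes the list already has a head node):

```java
// Hedged reconstruction of the two classes from lecture. Note that add()
// assumes head already exists: called on an empty list, "head.next" below
// would throw a NullPointerException, which is why main() adds the first
// node itself.
class Node {
    int data;
    Node next;          // pointer to the following node (null at the end)

    Node(int data) {
        this.data = data;
    }
}

public class LinkedList {
    Node head;          // our only pointer into the list from outside

    // Walk to the node that "points nowhere" and hang the new node off it.
    public void add(int data) {
        Node current = head;        // if head is null, the next line blows up
        while (current.next != null) {
            current = current.next; // travel one-by-one toward the end
        }
        current.next = new Node(data);
    }

    public static void main(String[] args) {
        LinkedList list = new LinkedList();
        list.head = new Node(1);    // main() adds the FIRST node itself
        list.add(2);
        list.add(3);
        for (Node n = list.head; n != null; n = n.next) {
            System.out.println(n.data); // prints 1, then 2, then 3
        }
    }
}
```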

Recursion is the next topic introduced in class today. A clever definition found on Wikipedia states:

**Recursion**: If you still don't get it, see *Recursion*.

The main idea of recursion is that you will repeatedly enter the same method (or definition entry, in this case) until a stop condition is reached.

In a recursive method/function, you will see a structure similar to the following (pointless) method, which returns 1 no matter what positive integer you give it:

```java
int find1(int n) {
    if (n == 1)              // "base case" or "stop case": stop recursing here
        return n;
    else
        return find1(n - 1); // "recursive case": call the same method again with
                             // a new input that brings you closer to the goal
}
```

Note that "return(find1(n-1))" is syntactically equivalent to "int r = find1(n-1); return(r);" because "find1(n-1)" evaluates to (after it's processed, it equals) the return value from that function. There is really no need to store the return value in the integer "r" if we are just going to "return" it on the next line.

The point of calling the same method again from within itself is that we will pass it a new data set, which is a *subset* of the original input value, so that we will eventually (with enough recursions) *fulfill the stop case*. Recursion is very useful in many different cases, including binary searches, computing factorials, and for manipulating our friend, the linked list.
Check out the method "LinkedList.findEnd(Node n)" in the "LinkedList.html" file above to see the recursive method that we wrote in class. This "findEnd()" method is quite similar to the "add()" method, except we reinterpreted the loop and turned it into a recursive call. Most methods with loops can be reformatted into recursive methods, but you should be aware that recursive methods are less intuitive to program/read, and they take up more memory because a new copy of the method's data is placed into memory with each recursive call. The picture below represents the copies/levels of a recursive method call.
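Since LinkedList.html isn't reproduced here, this is a hedged sketch of what a recursive findEnd() might look like:

```java
// Hedged sketch of the recursive findEnd() mentioned above: same job as the
// loop in add(), but each call hands the REST of the list back to itself.
class Node {
    int data;
    Node next;
    Node(int data) { this.data = data; }
}

public class RecursiveList {
    // Returns the last node reachable from n.
    public static Node findEnd(Node n) {
        if (n.next == null) {   // base/stop case: this node points nowhere
            return n;
        }
        return findEnd(n.next); // recursive case: a list one node shorter
    }

    public static void main(String[] args) {
        Node head = new Node(1);
        head.next = new Node(2);
        head.next.next = new Node(3);
        System.out.println(findEnd(head).data); // prints 3
    }
}
```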

Note that this is the most basic example of recursion, and it is common to find recursive algorithms that have multiple conditions for the recursive case (under the "else"), such as in the binary search.

***

Today we asked a very important question that will help guide a good portion of your studies in computer science:

**Why are there different data structures, and how do we decide which ones to use?**

Answers:

1) Different structures perform differently depending on which specific operations are occurring.

2) Real-world situations are reproduced better by certain data structures, allowing for flexible modeling of real-world problems.

We arrived at answer #1 by asking, "what do we want to DO with these data structures?" When we have a collection of data, the basic operations that we would perform are: **insert** (add data to our collection), **retrieve** (access the data without moving it), and **remove** (delete data from the collection). To decide which structure we will use, we need to know how well each structure performs each of the aforementioned operations. We already learned that we use big-Oh notation, "O(n)", to describe the performance of an algorithm by gauging its worst-case runtime based on the number "n" of input data elements. Let's look at the runtime for the three operations on a familiar, basic structure: the array.

Array | first | last | arbitrary | reasoning |
---|---|---|---|---|
insert | O(n) | O(n)* | O(n) | Adding to the beginning requires you to shift all the elements, and you always have the possibility of needing a bigger array. |
delete | O(n) | O(1) | O(n) | Deleting from the end is good here because the entire structure does not need to be altered. |
retrieve | O(1) | O(1) | O(1) | Retrieval from any location in an array is O(1), meaning that it does not take longer when the data set increases. |

*Could be O(1) if there was empty space in the preexisting array (and we had the cell number of the empty space). But because big-Oh is a measure of the worst-case performance, we will assume the worst case: we have to make a new array of size "2n" and then copy all "n" elements into it, so this is generalized as O(n).
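To make the table concrete, here is a small sketch of inserting at the front of an array (the method name is made up): every existing element has to be copied one cell over, which is why the operation is O(n).

```java
import java.util.Arrays;

// Why inserting at the front of an array is O(n): every existing element
// must shift one cell to the right to make room (and we may also need a
// bigger array, as in the footnote above).
public class ArrayInsert {
    public static int[] insertFirst(int[] arr, int value) {
        int[] bigger = new int[arr.length + 1];  // allocate room for one more
        for (int i = 0; i < arr.length; i++) {
            bigger[i + 1] = arr[i];              // shift: n copy operations
        }
        bigger[0] = value;
        return bigger;
    }

    public static void main(String[] args) {
        int[] data = {2, 3, 4};
        System.out.println(Arrays.toString(insertFirst(data, 1))); // [1, 2, 3, 4]
    }
}
```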

We can see that an array is advantageous if you are retrieving data and you already know where it's located (since each value is directly indexed, requiring no traversal). But for insertion and deletion, the runtime is not as good. Let's see how it compares to our first new data structure, the linked list. We have learned that linked lists do not have outside pointers to all of the elements, so let's see if there are other benefits of their structures.

Linked List | first | last | arbitrary | reasoning |
---|---|---|---|---|
insert | O(1) | O(1) | O(n) | This assumes that you already have pointers to the head and tail of the list. Insertion then only requires modification of the adjacent node(s). |
delete | O(1) | O(1)* | O(n) | Deleting also only requires modification of the adjacent node(s), assuming you have pointers to the head and tail. |
retrieve | O(1) | O(1) | O(n) | The head and tail can be reached directly through their pointers, but an arbitrary element requires traversing the list node-by-node. |

*Assuming you have a doubly-linked list. With a singly-linked list (SLL), it would be O(n) because you would have to iterate through all of the nodes so you could correctly update the second-to-last node to have its "next" be null (even if you have a pointer to the tail, you can't get to "tail.previous" in an SLL).

You can see here that linked lists are good for adding and removing data at the beginning or end. Because the structure of a linked list is held together internally, manipulating the beginning or end of the data does not change any other part of the structure. Therefore, a linked list would be advantageous if your data will fluctuate, while the array is best for static data with a lot of retrieving. Another advantage that arrays have over linked lists is that they require no "overhead" and are much quicker to program: linked lists require extra memory space (and programming time) for each data element so that the nodes can point to the adjacent data elements (arrays don't require this node-linking structure because their elements are sequentially ordered in memory).

Answer #2 deals with the fact that certain data structures can better model certain data sets. For example, if we are trying to determine the best travel route from one city to another, what structure would help you model the problem? Let's look at our data set:

You can see that some of these nodes have more than one line going to/from them, and a linked list is ONLY allowed to have *one* "next" node and *one* "previous" node. The connections among these nodes represent a different data structure, the **graph**. A graph may have cycles (impossible in a list—*except circular lists*) and the connections between the nodes may or may not have a direction. This makes it a good tool for modeling networks of cities, computers, etc. Any linked list could be represented as a graph, but only certain graphs can be represented as a linked list (each node would have to have one predecessor and no internal cycles). The relationship between these two structures is very similar to our inheritance model: linked lists are specialized graphs, so graphs can be seen as a "parent" of the linked list. Therefore, all linked lists "are" graphs, but not vice versa. **Trees** lie in the hierarchy between graphs and lists. They follow the rule that all nodes have one predecessor and a specified number of successors. In a *binary tree* each node has up to two children (and one parent), and in a *quadtree* each node can have up to four children (but still only one parent). The relationship between the data structures is as follows:

(general) --> Graphs --> Trees --> Lists --> Arrays --> (specific)

What this means is that everything you could represent with a graph, you could also represent with an array provided that you can arrange the elements in a more specific way (with a fixed rank/index for each element, so only one parent/child for each node).

So when asked in an interview, "what is the best data structure?", you now know the answer: it is all about **context**. The data to be modeled and the common operations to be performed will help you determine which structure to use, for it is different in each situation.

We began class with some interesting examples of recursion in nature and music. Coastlines, trees, sea shells, rocks, cells, and basically everything in nature has some kind of mathematical link to recursion. Check out the picture below of romanesco broccoli. This remarkable food demonstrates fractal patterns, which are repeating patterns that can be achieved with recursive algorithms.

In addition to natural patterns of recursion, these repetitions have been discovered in human art forms such as music. For example, the progression of musical notes on a given interval demonstrates a pattern that can be seen on a larger scale over a larger interval of the same piece of music. The same is true of trees; if you examine a branch, you will see that it resembles the structure of the entire tree. Check out this interactive fractal tree applet to learn more about the fractal tree, which is a mathematically generated tree structure with a striking resemblance to nature.

Now for an introduction to the next lab. We will be programming an example of "computer vision", which is a field of computer science related to artificial intelligence. Through computer vision, computers are able to interpret visual data in ways that mimic human visual processes. Robots and other programs use this technology to interpret input from their surroundings, or from the user. Dr. Lakaemper's current research is in the field of computer vision, and his page on the Shape Similarity Project shows interactive examples of shape abstraction (basically what you are performing in the lab).

For our lab, we will be given a collection of points that represent a 2D shape. Our task is to simplify the shape so that only the "important" visual data (the general outline of the shape) is preserved, while anything that is visually unimportant (small details and nuances) is removed. Therefore, we can take a rigid set of points and transform it into a simpler set of points that is easier for the computer to use in various interpretation algorithms. In order to simplify the shape, we will use the algorithm developed by Dr. Lakaemper and Dr. Latecki, called "Discrete Curve Evolution". It is described in detail on the lab 4 page.

So, after looking at the assignment for lab 4, it is apparent that our algorithm mainly requires two operations: traversal and deletion. Based on what we learned on Tuesday, which structure would perform these operations the best? Traversal would be O(n) for both an array and a list. When we traverse the data, we are finding the location of the point to delete; when comparing runtime for the deletion operation, note that we should already have a *pointer to the deletion location*. Therefore, we can see that a linked list is the clear choice, for deletion has a runtime of O(1) as opposed to the array with O(n). In a linked list, you only have to modify the two surrounding nodes when a deletion occurs, but with an array you would have to shift the entire remainder of the list every time a deletion is made.
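A minimal sketch of why deletion is O(1) once you already hold a pointer to the node (the `Node` class here is hypothetical, not the lab's actual code): only the two neighboring nodes are rewired, and nothing shifts.

```java
// Sketch: deleting a node from a double-linked list when you already have a
// reference to it only rewires the two neighbors — a constant number of steps.
public class DoublyLinked {
    static class Node {
        int data;
        Node prev, next;
        Node(int data) { this.data = data; }
    }

    // Remove an interior node n (assumes n has both neighbors, for brevity).
    static void delete(Node n) {
        n.prev.next = n.next;   // left neighbor now skips over n
        n.next.prev = n.prev;   // right neighbor now skips over n
    }

    public static void main(String[] args) {
        Node a = new Node(1), b = new Node(2), c = new Node(3);
        a.next = b; b.prev = a;
        b.next = c; c.prev = b;
        delete(b);                       // two pointer writes, no shifting
        System.out.println(a.next.data); // prints 3
    }
}
```

Compare this with an array, where deleting an interior element forces you to shift every element after it down by one cell.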

.

First we took a quiz dealing with big-Oh notation, and many of the questions asked you to give the runtime for different operations on arrays or lists. Always remember that an operation such as "removeLast() from a double linked list" has a concrete number of steps (let's say 3), but the magnitude is **not** O(3)—it's O(1) because we always write runtimes in relation to "n". If the runtime is constant, such as 3, then it never increases no matter how big "n" grows. So "O(1)" is just saying that it's constant runtime, unaffected by increases in "n". Whenever we evaluate an algorithm, we really are just trying to gauge how well it would perform when "n" is extremely large. If "n" were ten million, an algorithm that always takes 3 operations would be negligible compared to an algorithm that is O(n) and needs ten million operations (such as removing the first element of an array with a size of ten million).

Today we were introduced to the wonderful Java interface called "List" (Ch 4, pg 195). This interface contains all the methods needed to maintain elements in a list or array, so you don't have to write them yourself! We learned how to write these methods from scratch because it's important that you understand how they work, and you will indeed need to know how to write add() and remove() methods for linked lists in the courses you will take after this (in C, Matlab, etc). But in Java at least you have (hopefully) cleared that hurdle and you should be comfortable with the underlying structure of lists.

So the List interface is implemented by many different classes (ArrayList, LinkedList, Stack, Vector, etc), but will they actually behave differently if they all share the same "List" methods? How do we know the underlying structure and methods that are at work when Java creates the LinkedList or ArrayList for us? Unless you want to find all of the Java source code and mull over that for a while, there's the option of reverse-engineering: using what we already know to determine how Java has implemented these structures. We can use our knowledge of runtimes on the *real* array and linked list structures and compare these to the runtimes that occur when the Java classes are used. That way, without looking at any source code, we can make an educated guess as to how Java has implemented these structures.

Below is the code from class where we used the System.currentTimeMillis() method as a stopwatch to time how long certain operations took. Rather than just declaring a LinkedList or an ArrayList, you should always use the "generics" convention (LinkedList**<String>**), which provides the syntax to specify the element type of the structure (otherwise the elements would be treated as type Object, and whatever you put into the list would be upcast to "Object", thus requiring a downcast to the original type when an element is removed from the collection and operated upon).

ArrayList and LinkedList test code

As you can see by running this code, there is indeed a difference in the ArrayList and LinkedList's performance. But how will we determine which type of linked list is implemented? You should copy the code in the above link and try different methods (adding to/removing from the beginning, middle, end) to see how the runtimes differ. Even though Java has implemented the linked list structure for you, you should not blindly trust that it is optimized to perform the way you expect it to. Certain algorithms that you write might require you to implement the structure differently than how Java has done. Remember that you are smarter than the computer so don't put too much faith in it, the most important thing is to think for yourself! When we stop thinking for ourselves is when the machines will rule us all... mwahaha!
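In the same spirit as the class code linked above, here is a hedged sketch of such a stopwatch experiment; the method name and the choice of `add(0, i)` (repeated insertion at the front) are illustrative, not the exact code from class.

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

// Sketch of the timing experiment: time front-insertions on both List
// implementations using System.currentTimeMillis() as a stopwatch.
public class ListTiming {
    static long timeAddAtFront(List<Integer> list, int n) {
        long start = System.currentTimeMillis();
        for (int i = 0; i < n; i++) {
            list.add(0, i);   // insert at the beginning every time
        }
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) {
        int n = 30000;
        long arrayMs  = timeAddAtFront(new ArrayList<Integer>(), n);
        long linkedMs = timeAddAtFront(new LinkedList<Integer>(), n);
        // Expect ArrayList to fare worse here: add(0, x) must shift every
        // existing element, while LinkedList only rewires the head.
        System.out.println("ArrayList:  " + arrayMs + " ms");
        System.out.println("LinkedList: " + linkedMs + " ms");
    }
}
```

Try swapping in other operations (adding/removing at the end or middle, get()) and watch how the gap between the two classes changes with "n".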

Today we continued our discussion of the List interface and focused on our favorite implementor of this interface, the LinkedList. On Tuesday we analyzed the "add" and "remove" operations, so what about list traversal? Let's look at the API of LinkedList and see if we find a good method to use for traversing the list.

The method that appeared to be the best fit was get(), for when traversing we would want to "get" the value of each node, since there's not much use in traversing a list if you're not looking inside it somehow (trying to find a specific value, finding the smallest value, etc). So we modified our code from Tuesday and attempted to traverse the list with the get() method.

In our *for* loop, we wrote "get(i);" to get the value of each node in succession. Using System.currentTimeMillis(), we graphed the runtime of the "get()" operation using different values of "n". To our surprise, the runtime of this new *for* loop algorithm was NOT the linear relationship O(n), which would have graphed as a straight diagonal line that increases steadily (runtime directly proportional to "n") over time. Instead, the runtime seemed to increase drastically as we got to higher values, and our graph showed a curved line such as this quadratic graph. As you can see in such a graph, the relationship between the number of elements (x-axis) and the runtime (y-axis) is found by using an equation like y=x^{2} (quadratic), not y=x (linear). In our graph, the values of "n" (x-axis) were much higher (between 500,000 and 3,000,000), and our runtimes (y-axis) ranged between 47 and 230 milliseconds. The relationship still resembles O(n^{2}) even though you don't get 230 when you square 3,000,000; but if we multiplied by some small constant "c", we could make the formula "runtime = c*n^{2}", and it would fit our graph. Remember that constants are always simplified to "1" when we write the big-Oh notation because *constants are not affected by the size of "n"* and can therefore be ignored. So if the graph is described by f(n)=c*n^{2}, this simplifies to f(n)=n^{2}, and our big-Oh relationship is O(n^{2}).

After graphing our results and determining the curve to be O(n^{2}), it is apparent that this algorithm is NOT using a constant number of operations to jump from one element to the next, and "get()" must be starting at the beginning every time and traversing until it finds the "i"th value. Big-oh really depends on how many "n"-dependent operations are present in an algorithm—in this traversal algorithm, our *for* loop's number of operations (comparisons and incrementing done in the for-loop header, the initial call to "get") grows steadily as "n" grows, and ALSO the number of *internal operations* of "get()" grows steadily as "n" grows. So nesting the n-dependent "get" operation inside of an n-dependent *for* loop results in the runtime of O(n^{2}). You could argue that there won't actually be n^{2} operations performed in this algorithm, since "get()" doesn't traverse through all "n" nodes every time. But remember that big-Oh is an *approximation of the worst case* runtime. And the worst case of the "get()" operation is when it has to traverse through all "n" nodes to get to the last element. So our approximation of O(n^{2}) holds true.

But there must be another method to traverse through the list that is O(n)... what could it be? The solution to list traversal is the **Iterator** interface. Iterator has methods that allow you to traverse through the list with an O(n) total runtime. To use an iterator with your LinkedList, do the following:

`LinkedList<Integer> ll = new LinkedList<Integer>();`

**Iterator<Integer> it = ll.iterator();** `//iterator() breaks the rule of "methods start with a verb", but it's basically getting an iterator for the list "ll"`

for (int i=0; i<n; i++)

{

int data = **it.next()**; `//Iterator's method "next()" will start at the first element and move to the next element every time it's called`


}

This *for* loop can be simplified to...

`LinkedList<Integer> ll = new LinkedList<Integer>();`

for ( **Iterator<Integer> it = ll.iterator(); it.hasNext();** ) `//it.hasNext() returns "true" or "false" depending on whether there is a next element. Note that there's no "increment" expression in the for loop header; advancing is accomplished instead by the "it.next()" call below `

{

int data = **it.next()**; `//Iterator's method "next()" will start at the first element and move to the next element every time it's called`


}

The above *for* loop will run in O(n) time because the iterator is like a temporary pointer to the current list element, and it only needs to shift by one element every time "next()" is called (constant amount of operations, not affected by n). There is actually a simpler way to write this iterative *for* loop, using the "for each" notation introduced in Java 5 (pg 228). This makes traversal easier because you don't even have to create an Iterator object, the "for each" loop does this internally.

`LinkedList<Integer> ll = new LinkedList<Integer>();`

`//We never have to create an iterator because the "for each" accomplishes the same thing`

**for (int data : ll)** `//"for each" loop syntax, as you can see, is a lot simpler than the regular for loop—note that the word "each" does NOT appear in the code `

{

System.out.println(data); `//each iteration of the "for each" loop is advancing the hidden iterator through the list and getting each element ("data") for you to use `

}

For more details about the "for each" loop, including a list of when *not *to use it, check out Fred Schwartz's Java notes: For-each loop.

The next topic of today's class is a new (actually very old—patented in 1957) data structure, called a **stack**. To visualize a stack, picture a pile (or stack) of plates. Actually, the "stack" would be the vertical bin that holds a pile of plates (the plates are the elements "on" the stack). This vertical bin only allows access to the top plate, for its sides prevent you from grabbing a plate from somewhere further down in the pile. So when you place a bunch of plates into this bin/"stack", you only have access to the last plate you put on the stack; it's the only one you can take off of the stack. Therefore, the behavior of the stack structure can be described as "last in first out"—abbreviated as LIFO.

When operating on a stack, there are three main operations: "pop", "push", and "peek".

- Pop - similar to removeFirst, pop is used to remove and retrieve the top element of the stack
- Push - similar to addFirst, push is used to add a new element to the top of the stack
- Peek - similar to getFirst, peek is used to retrieve the value of the top element of the stack without removing it

.

Our quiz today involved writing a "pop" and "push" method for a stack. It is important that you know the size limit of the stack because both methods need to perform a size check before popping or pushing. If you want to perform a push, you must first check if the stack pointer is at a location equal to the stack size. If so, then there is no room on the stack to push onto! This would be similar to adding an element to position "6" of a 6-celled array; you would not be able to do this because the cells are addressed from 0 to 5, and there is no cell "6". In contrast, if you want to perform a pop, you must check if the stack pointer is at zero, signifying that there is nothing on the stack (the pointer will be at the *location of the empty cell to push into*, so when it's at zero that means cell zero is empty).
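A sketch of an array-backed stack along the lines of the quiz (the `ArrayStack` and `top` names are illustrative): "top" is the index of the next empty cell, so push checks for a full array and pop checks for an empty one before touching anything.

```java
// Array-backed stack sketch: "top" always points at the next empty cell,
// so top == capacity means full and top == 0 means empty.
public class ArrayStack {
    private final int[] elements;
    private int top = 0;              // index of the next empty cell

    public ArrayStack(int capacity) { elements = new int[capacity]; }

    public boolean push(int value) {
        if (top == elements.length) return false; // full: no cell "top" exists
        elements[top++] = value;
        return true;
    }

    public int pop() {
        if (top == 0) throw new IllegalStateException("stack empty");
        return elements[--top];       // cell top-1 holds the most recent value
    }

    public int peek() {
        if (top == 0) throw new IllegalStateException("stack empty");
        return elements[top - 1];     // look without removing
    }
}
```

Notice the symmetry: push increments top *after* writing, pop decrements top *before* reading, which is exactly why the bounds checks compare against the capacity and zero respectively.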

Now we will demonstrate how a stack can be used to evaluate functions that have a nested order of operations (such as: (16 + 5*4)/9 + 17*3 ). We can do this in an efficient way using the stack to store all of the input data and also store the intermediate nested results without having to use any other storage space. We can first represent this equation as a binary tree in order to see the hierarchy of the operations:

As you can see, this binary tree (called an "expression tree" because it represents a mathematical expression) requires you to start at the ends and perform the operations as you work your way back to the top, middle node—called the "root" node—to finally reach the result of this equation. Binary trees are similar to linked lists because they are made up of nodes and pointers. A binary tree is different because each node has a pointer to at most 2 children nodes, labeled "left" and "right" (in a *full* binary tree, each node *must* have 0 or 2 children; a *simple*, *rooted* binary tree can have 0, 1, or 2 children per node. See Wikipedia for an explanation of the different categories of binary trees). Binary trees demonstrate a recursive structure because each child node is also the root node of the sub-tree that is underneath it. The recursive nature of trees is evidenced in the definition from our book on page 400:

A set of nodes T is a binary tree if either of the following is true:

- T is empty

- If T is not empty, its root node has two subtrees, T_{L} and T_{R}, such that T_{L} and T_{R} are binary trees

Based on the recursive structure of this binary tree definition, can you see how a recursive method could be used to traverse a tree? We will go into more detail about traversing binary trees on Thursday.

Now back to our stack-oriented approach to solving the nested equation. We will use a method called "post-order" traversal (see Thursday) to re-write the data from the expression tree in an order that can be interpreted by the stack. In post-order traversal, you will write down the value of a node only after you have visited both of the children in the order left, right (and you will treat each child node in the same way). Therefore, the leftmost node is the one that gets written down first, and you will end up with this expression:

16 5 4 * + 9 / 17 3 * +

This notation is also known as "postfix notation", or "reverse Polish notation"—the reverse of the prefix "Polish notation" developed by the Polish mathematician Jan Łukasiewicz in the 1920s. Now we can operate on this postfix expression using a stack and see the benefits of writing an equation in such a strange way. Starting with the first character (16), we will push the values onto the stack until we reach an operator (*, +, or /). When we reach an operator, we will pop two elements off the stack (popping is always from the top, so it will be the two most recent elements pushed on the stack) and perform the operation on these values. Then we will push the result of this operation onto the stack and continue with the original expression.

The above diagram represents the first 4 characters of the equation. The next character is "+" so 20 and 16 will be popped, and the result of the operation "16 + 20" will be pushed back onto the stack. Eventually you will have 4 and 51 on the stack, and the last character, "+", will be performed on those two elements, giving the result of the equation, 55.
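The walk-through above can be sketched in code. This is an illustrative evaluator (using Java's `ArrayDeque` as the stack), not the class's official solution: numbers are pushed, and each operator pops two values, applies itself, and pushes the result back.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Postfix (reverse Polish) evaluation with a stack: push operands, and on
// each operator pop two values, combine them, and push the result back.
public class PostfixEval {
    static int eval(String expression) {
        Deque<Integer> stack = new ArrayDeque<Integer>();
        for (String token : expression.split("\\s+")) {
            switch (token) {
                case "+": case "-": case "*": case "/":
                    int right = stack.pop();   // most recently pushed operand
                    int left  = stack.pop();
                    stack.push(apply(token, left, right));
                    break;
                default:
                    stack.push(Integer.parseInt(token)); // a plain number
            }
        }
        return stack.pop();                    // the final result remains on top
    }

    static int apply(String op, int a, int b) {
        switch (op) {
            case "+": return a + b;
            case "-": return a - b;
            case "*": return a * b;
            default:  return a / b;
        }
    }

    public static void main(String[] args) {
        // (16 + 5*4)/9 + 17*3, written in postfix as in the class example:
        System.out.println(eval("16 5 4 * + 9 / 17 3 * +")); // prints 55
    }
}
```

Tracing it by hand reproduces the diagram: 20 replaces "5 4", 36 replaces "16 20", 4 replaces "36 9", and the final "+" combines 4 and 51 into 55.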

Our class today began with a generalization about recursion as compared to an iterative (looping) method: it *always* uses more memory (because the stack gets filled with the return address and input variables every time a new function call is made) and is therefore *always* slower. BUT, recursion is easier to read (simpler format) and easier to code (fewer lines of code, fewer if/else conditions).

We will see that binary tree traversal is relatively simple to code using recursion, mostly because trees are indeed *recursive structures* (see Tuesday 10/14).

When designing the tree's data structure, it is important to keep the Node class small and simple. This will help speed up operations that require node creation and comparison. Therefore, the node class should not contain any methods, only fields:

class Node {

float data; //value stored in the node

int level; //level assigned by assignLevels

Node left; //pointers to the two children

Node right;

}

The Tree class should be similar to the LinkedList class that we created, with a pointer to the first node (trees are identified by the "root" node) and the appropriate methods for maintaining the tree. Currently, we are only interested in a method that will assign a level to every node in the tree (the root node being at level "1", its children at level "2", etc.).

class Tree {

Node root; //the only pointer to the tree from the outside

public void assignLevels(){

recAssign(root, 1); //We are wrapping this recursive method within a non-recursive method to make "assignLevels()" a simple call. Whenever you want to assign the levels, just call this method and then you don't have to think about what input arguments are needed since the programmer has already entered the initial arguments "root, 1"

}

}

So now we must write the recursive function, "recAssign(root,1)" (Note: the word "function" is more ambiguous than "method", and a function is not necessarily tied to a specific class. It is just an algorithm that produces a result. recAssign does *not* necessarily have to be defined in class Tree, but assignLevels *does* because it passes the "root" node to recAssign, and "root" is a field of the Tree class). So the first thing we do when writing a recursive function/method is to write the method header, and then the structure: "if (stop case), else (recursive case)".

public void recAssign(Node n, int h) //IN: n: node that we are assigning; h: level/height of current node

{

if... //some stop case

return... //should match the return type of the method

else... //recursive case

//if necessary, perform some operation on the node we're currently visiting

call recAssign again... //and pass it 1 or more subsets of the original problem

//call recAssign more times if there is more than 1 subset

}

To complete this method, we must figure out the stop case, and also how many subsets of the original problem there will be (this will define the number of recursive calls that are made each time). When solving any problem recursively, you can imagine that you are branching out from a starting location with individual paths that will keep branching out and covering all of the necessary data elements until the "end" is found. The "end" could be the smallest element (in the Snowflake lab, the recursion stops when the lines are small enough), or the furthest element (in a tree, each recursive path stops when a childless "leaf" node is found since this path cannot go any further).

These types of recursive problems require *more than one recursive call* (there were 4 calls to drawLineRec from within drawLineRec's recursive case for the Snowflake lab) because you want to spawn enough recursive paths to cover each subset of the input data set. However, many recursive problems (such as finding the end of a single-linked list—see FindEnd) only require *one* recursive call since only a single subset of the data is needed each time. The "structure" of your data will help you determine how many subsets of a problem can be created. A single-linked list only has one pointer from each node, so when dealing with a node it would be hard to devise a subset using anything but that single "next" pointer. Therefore, the FindEnd method gets the input "Node n", which represents "the list starting at node n", and recursively calls the method with the subset "n.next" because this represents "the list starting at n.next"—one less element, so eventually it will get to the "end". In contrast, the Snowflake's recursive method had a single straight line as its input data, but we wanted to create 4 sub-lines out of the input line, so there were 4 recursive calls to receive each of these 4 sub-lines.

*The job of the "recursive case" is to possibly operate on the current input data, and then create the subset(s) to pass as input arguments to the recursive call(s).* In FindEnd, hardly any work was needed to pass the subset "n.next" to the method call since you just needed to write "n.next" to "create" the subset. But in the Snowflake lab, you had to do a bunch of math (in computePoints) and create 4 new arrays to represent the new subsets that were passed to each of the 4 recursive calls.

In our binary tree method, how many subsets should be created from the input "root" node? After what we learned on Tuesday about the binary tree's structure, it should be clear that there are 2 subsets of a root node, because each node must have a "left" and "right" child node, which will be either the root of the respective subtree, or an empty tree (null node). So with the input of recAssign being the root node of a tree, we can take the left and right nodes and pass them recursively since they must represent the root node of a subtree (which may be "null" representing an empty tree). Should we pass an empty, null node as the input to the recAssign function? Are you worried that it will try to find the left and right nodes of this null node and then encounter a null pointer exception? Well, this is the reason that there's a "stop" case in every recursive function. Let's make the "if" case check if the node is null. If it's null, you won't ever reach the recursive "else" case, so nothing will be done except calling return. You should follow the method's "void" return value and return nothing; the point of this method is not to return anything, but to set the "level" field of each node to the correct integer value. The stop case just lets you know when you can't travel down a certain recursive path any further.

public void recAssign(Node n, int h) //IN: n: node that we are assigning; h: level/height of current node

{

if (n == null) //if the subtree is empty

return; //return, since you are at the end of a recursive path

else { //recursive case

n.level = h; //set the height of current node

recAssign(n.left, h+1); //pass recAssign the left subtree and the new value to use for this level

recAssign(n.right, h+1); //pass recAssign the right subtree and the new value

}

}

As you can hopefully see, this recursive method will visit each of the nodes and set them all equal to the correct level. In this method, which node will have its value set first? If you look at the recursive case, first the level of the current node gets assigned, and then the children nodes are visited. This method of operating on a node before you visit its children is called "pre-order traversal" because the node is "visited" (or operated on) before the children. Which node will be operated on last? Looking at the order of the recursive calls, recAssign for the left subtree is called first. The next line, "recAssign(n.right, h+1)", will not be called until the left-hand first recursive call completes, which only happens when all the leaves (nodes with null children) in the left subtree have been reached. Recursion keeps continuing forward until a return is reached, signifying that a path has ended and you can finally operate on the right subtree. Look at the diagram on pg 407 to visualize how the algorithm moves along the tree. The major difference in the 3 traversal methods (pg 406) is where the "visit" or "operate on" step is located in relation to the calls to traverse the left and right subtrees. In pre-order traversal, the last node to be visited is the highest-level (furthest from root) node on the rightmost path. Check out this traversal example on Wikipedia to see the differences in the three traversal methods: pre-order, in-order, and post-order. You should keep in mind that post-order traversal is used to produce the "postfix" expression that was discussed on Tuesday 10/14. Prefix and infix notation are produced when an expression tree is visited with pre-order and in-order traversal respectively.
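To see the three traversals side by side, here is a sketch (the `Traversals`/`Node` names are illustrative, not from the book): the three method bodies are identical except for where the "visit" step sits relative to the two recursive calls.

```java
// The three recursive traversals differ only in WHERE the "visit" step
// (here: appending the node's data) sits relative to the recursive calls.
public class Traversals {
    static class Node {
        String data;
        Node left, right;
        Node(String data, Node left, Node right) {
            this.data = data; this.left = left; this.right = right;
        }
    }

    static void preOrder(Node n, StringBuilder out) {
        if (n == null) return;
        out.append(n.data).append(' ');  // visit BEFORE both children
        preOrder(n.left, out);
        preOrder(n.right, out);
    }

    static void inOrder(Node n, StringBuilder out) {
        if (n == null) return;
        inOrder(n.left, out);
        out.append(n.data).append(' ');  // visit BETWEEN the children
        inOrder(n.right, out);
    }

    static void postOrder(Node n, StringBuilder out) {
        if (n == null) return;
        postOrder(n.left, out);
        postOrder(n.right, out);
        out.append(n.data).append(' ');  // visit AFTER both children
    }

    public static void main(String[] args) {
        // Tiny expression tree for 5 * 4, rooted at "*":
        Node root = new Node("*",
                new Node("5", null, null),
                new Node("4", null, null));
        StringBuilder post = new StringBuilder();
        postOrder(root, post);
        System.out.println(post.toString().trim()); // prints "5 4 *" — postfix
    }
}
```

Running the same tree through preOrder yields "* 5 4" (prefix) and through inOrder yields "5 * 4" (infix), matching the expression-tree discussion from Tuesday.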

.

Midterm Review day.

Midterm today.

.

Today we started with a lesson in German geography. How many countries border Germany? Here is a map showing the answer, which is 9 countries. This is more than any other country in Europe according to Wikipedia. So looking at the map, we started to discuss "topology", which refers to the connectivity between elements. The neighboring countries of Germany represent a topological relationship, since we are not concerned with the sizes/distances of the bordering countries (geometrical relationship), just the fact that they are connected to Germany. When talking about the structure of trees or lists, we must focus on the topology—or the connections—between nodes in memory, because the concepts of distance are unimportant when we make referential links. A link in a tree has no distance assigned to it (you will later learn about other data structures that do, such as weighted graphs). We should know the topology of a binary tree: every node has a max of one parent node, and a max of two subtrees (children nodes—remember that a subtree could be "empty").

So what is the purpose of a binary search tree (BST)? After we determine the runtime of searching for a specific element, it should be clear that the purpose of binary search trees is searching (if the name didn't give it away).

If you are searching for an element in a *list*, the runtime is O(n) because the worst case is that the element is at the end of the list and you must traverse through all "n" elements. When searching for an element in a BST, the runtime will be equal to the height of the tree, because the furthest you will have to search is through one entire branch of the tree (more organization of nodes = shorter time to find a single node). A property of a tree that affects its search efficiency is how "balanced" or "full" it is, since this is directly related to the height of the tree. A completely unbalanced tree would resemble a list, for it would be a straight path of single-children nodes (basically a single-linked list) and have a height of "n" since all nodes are in a single, straight branch. In contrast, a "full" tree is the most balanced form that a tree can take: all nodes must have two children, except for the leaves, which have no children and are all at the same bottom level of the tree (pg 403). The height of a full tree can be found with "log_{2}n" (result = what power you raise 2 to in order to get n), since the height of a tree represents "how many times you can divide the number of elements by two". The runtime of an unbalanced tree would still be O(n) (same as a single-linked list) since the height of the tree is n (worst case). But when we create BSTs, the tree will almost always take a more balanced form due to chance, or due to a self-balancing algorithm that keeps the tree at a minimal number of levels during insertions. Therefore, the generally accepted runtime for searching in a BST is O(log_{2}n), representing a more full tree.
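A sketch of the BST search just described (the `insert` helper is hypothetical, included only to build a valid test tree): each comparison discards an entire subtree, so the number of steps is bounded by the height of the tree.

```java
// BST search sketch: every comparison discards one whole subtree, so the
// work is bounded by the tree's height (log2 n when the tree is balanced).
public class BstSearch {
    static class Node {
        int data;
        Node left, right;
        Node(int data) { this.data = data; }
    }

    static boolean contains(Node n, int target) {
        if (n == null) return false;                 // ran off the tree: absent
        if (target == n.data) return true;
        if (target < n.data) return contains(n.left, target);  // smaller: go left
        return contains(n.right, target);                      // larger: go right
    }

    // Hypothetical insert helper, just to build a tree with the BST property.
    static Node insert(Node n, int value) {
        if (n == null) return new Node(value);
        if (value < n.data) n.left = insert(n.left, value);
        else n.right = insert(n.right, value);
        return n;
    }

    public static void main(String[] args) {
        Node root = null;
        for (int v : new int[]{8, 3, 10, 1, 6, 14}) root = insert(root, v);
        System.out.println(contains(root, 6));  // prints true
        System.out.println(contains(root, 7));  // prints false
    }
}
```

Note that the recursion follows only ONE branch (left or right, never both), which is exactly why the runtime is the height of the tree rather than "n".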

You can see based on the graph below that a runtime of O(log_{2}n) is better than O(n) because it has a smaller runtime (lower y-axis value) as you go to the right (larger x-axis value = more data elements).

Note that this graph is drawn on a scale used to represent the relationships among the functions, and the lines don't exactly represent the slopes of each function.

**BST insertion and deletion** can be found on pages 422-8.

How would we delete a node from a BST? After much consideration of different options, we settled upon the method that required the least amount of steps. When we remove a node, there are three possible cases and actions we might take:

1) The node has no children, so we nullify the pointer that pointed to it ("node.parent.L/RChild = null", or "the parent of our node has a child pointer that is pointing to our node. Instead, set it to null")

2) The node has a single child (left or right), so we simply redirect the parent pointer ("node.parent.L/RChild = node.L/RChild", or "the child pointer coming out of node's parent was pointing to node, but instead point it to node's child, thus bypassing node and removing it from the tree")

3) The node has two children, so we must replace it with a node in one of the subtrees that still has the property of being an "intermediary" to the other nodes in the subtrees. This case requires a graphical example for explanation. In the picture below, pretend that we are trying to delete the root node, 8. To maintain the balanced structure of the tree, and to keep the number of re-linking operations to a minimum, we want to select one of the other nodes and have it take the place of the removed node. You should be able to see that there are only two nodes that can replace the root node. What is the property that they have?

The only two nodes that can replace the 8 are the nodes 7 and 9. These nodes have the property of being an intermediary node to all the other nodes in the tree (once the 8 is removed), so either one of them can fit in the 8's position. We find these nodes by either taking the rightmost node of the left subtree, or the leftmost node of the right subtree. Keep in mind that these will not always be leaf nodes. If the 7 had a left child, it would still be the rightmost node of the left subtree. Then when you removed the 7 to use in place of the 8, you would follow case #2 above and have 7's parent point to this orphaned left child.
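The three deletion cases can be sketched recursively in Java (illustrative names, not the book's implementation). For the two-children case it copies in the rightmost value of the left subtree and then deletes that value from the subtree, exactly as described above:

```java
// Recursive BST deletion sketch covering the three cases.
class BstDelete {
    static class Node {
        int value;
        Node left, right;
        Node(int value) { this.value = value; }
    }

    // Returns the (possibly new) root of the subtree after deletion.
    static Node delete(Node root, int target) {
        if (root == null) return null;              // value not in the tree
        if (target < root.value) {
            root.left = delete(root.left, target);
        } else if (target > root.value) {
            root.right = delete(root.right, target);
        } else {
            // Cases 1 and 2: zero or one child -> bypass this node
            if (root.left == null) return root.right;
            if (root.right == null) return root.left;
            // Case 3: two children -> copy in the rightmost value of the
            // left subtree (the in-order predecessor), then delete it there
            Node pred = root.left;
            while (pred.right != null) pred = pred.right;
            root.value = pred.value;
            root.left = delete(root.left, pred.value);
        }
        return root;
    }
}
```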

The next topic of this week was the **heap data structure**. What is a heap? Heaps are trees that have less organization of their internal values, but more organization of their structure. The relationship among the nodes is that the value of a parent must be less than the values of each child (for a min-heap; in a max-heap, the parent has a greater value. In this course you can always assume min-heap unless otherwise stated). We can see how there is less organization of the internal values because there is no left-right relationship between the children. All that matters is that the parent-child relationship is kept.

The most important structural difference of heaps is that they must always be COMPLETE. Completeness is defined on page 404, and it means that the tree is "full" except possibly the right-side of the bottom row isn't filled entirely. The picture below explains completeness, for there can only be empty spaces on the bottom level starting from the right.

A tree is not a heap unless it is complete and the parent-child relationship is met. Both of the above trees meet these requirements. Notice that in the left heap, 6 is at level four, yet it is smaller than the 7 at level two. This is entirely legal because the 6 only depends on the 5 and the 2. There is no left-right relationship, so you must look at a single branch to determine if the relationships hold.

As you can see, this single-branch relationship model would never aid in searching for a value, since there is no way to tell which direction to take from a parent node—each child has the exact same relationship to the parent. The reason that this structure is useful is because the root element always has a unique property in the heap. The root will always be the minimum element of the data set, which is useful in *sorting*. If we are sorting in ascending order, we can start by removing the minimum element from a data set and placing it first in the sorted list, and then repeat this process until all the data elements have been placed into the list.

We can use a heap to sort a collection of data by continually removing the root node. The root node is the only node that will ever be removed from a heap since it's the only position that has a defined relationship to all of the other nodes. Therefore, after you remove the root node from the heap, you must find the new minimum value to serve as the new root node. Looking at the above example, which node would replace the removed root node? For all heaps, the next minimum node after the root node will be one of the nodes on level two. You have no way of knowing if it will be the right or left node, so you must compare them and use the smaller of the two to replace the root. However, doing so would result in an incomplete tree (empty spaces within the tree), which means it's not a heap! Deletion therefore is a little more complex, because the FIRST thing you should think about is keeping the tree COMPLETE, and then you can worry about maintaining the relationship of the values.

Deletion from a heap:

1) Copy the value from the root node to the end of your sorted list

2) Delete the rightmost node on the bottom level of the heap (to keep it complete) and insert this value into the root

3) This new root value will not be in the correct position (unless it's the only node in the heap), so you will need to move it

4) The lesser of the two children is the only one that can be promoted to parent, so swap the lesser child with this new root.

5) Check if the value is still not in the right place, and continue to swap with the lesser child until you have a heap again.

Note that the only structural move takes place at step #2, and after that you will only be doing swapping of values so that the heap's structure remains complete the entire time. When you are building a heap, the same rules are followed. First the new node is added to the rightmost position of the bottom level to maintain completeness. Then the values are compared, and if necessary, the new value will be swapped with its parent until it reaches an appropriate location in the heap (where its parent is less than it). Insertion and deletion can be found on pages 435-6, and the runtime of both of these methods is O(log_{2}n) since you might have to move a value up or down the entire height of the heap for each insertion or deletion. Like the BST, the height of a heap is represented by log_{2}n.

.

What's the most important property of a heap? It is COMPLETE. If you've been in class and don't know that by now, you must have been sleeping. But knowing this answer won't help you unless you understand how completeness relates to operating on a heap.

Every time you operate on a heap, the first thing you do is make a structural move that will keep the heap **complete**. If you are inserting a new element, you don't care about its value until it's already inserted into the heap (at the bottom row, to the right of the rightmost element). Only after it has been added to this position do you compare this new value to its parent node, and swap if necessary (bubble up the new element until it is in a position where the parent is lesser and the child is greater).

Rules for **insertion**:

-without any regard to the value, you ALWAYS put the new element at the "last" position to keep the heap *complete*

-then you can compare values and "bubble up" the new value by swapping with the parent until it's in the correct location

-do NOT analyze your values before you begin to build the heap, simply take them in the order they're given and follow the two steps above

Rules for **deletion**:

-you will ALWAYS take/delete the value from the *root node*

-delete the element in the "last" position, to keep the heap *complete*, and put this value into the root node

-now you will look at the values and "trickle down" this new root by swapping it with the lesser child until it's in the correct location

Question: how do you know which node in a heap to delete next?

Well, we can look at a picture of a heap and point to the bottom-level, rightmost node and say it’s the “last” node. But how do we translate this to the internal representation of the heap in memory? There is no simple algorithm to determine the bottom-level rightmost node, since you can't just go right-right-right from the root (the rightmost node is not always on the bottom row, so not always the “last” node!). To search for this node we would have to traverse every single branch starting from the left until we found one that didn't reach the lowest level (the previous branch would have the "last" element then). This would take a LONG time. So there must be another way to keep track of the "last" node. What about a pointer to it from the outside? That might be useful, but then when we delete a node, moving this "last" pointer isn't as simple as "moving to the left", since we're dealing with a tree structure. And if we had just removed the leftmost node in a level, the algorithm would have to know that the next "last" node is the rightmost on the previous level.

Solution: represent the heap as an array!

We will assign an index to each node in the same way that we check for "completeness"—by going across (left-to-right) and then down to the next level, and across again. By doing this, the "last" element of the heap will ALWAYS be the node with the highest index. And since an array is linear, we can have a "last" pointer that only has to "move to the left one space", or point to the next-highest index, every time a value is deleted. And since "last" represents an array index, this "pointer" is really just an int variable whose value decrements by 1 every time a node is deleted. The image below shows a heap that has been represented as an array. The "last" pointer would now equal 10, since that's the location of the next value to move into the root once a deletion is made (always remove the value from the root and then move the last value here). After a deletion, 3 would no longer be in the array, and there would be nothing in the 10th position. The value 19 will have moved to the root's position, but then it will trickle down (by swapping with the lesser child every time) until it's in the position of the 12.

Examine the relationship between each node and its children. What array index do the children have in relation to the parent? We determined that the children of a node are found with the simple formula: n*2+1, and n*2+2 (this is only true when you start your indexing at ZERO—the diagram above starts at 1 so it would be n*2, and n*2+1).
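Putting the pieces together, here is a minimal zero-indexed, array-backed min-heap sketch (illustrative code, not the book's implementation, with a fixed capacity for brevity). It uses the n*2+1 and n*2+2 child formula, keeps the heap complete by always inserting at and removing from the "last" slot, and does the bubble-up and trickle-down swaps described above:

```java
// Array-backed min-heap sketch. Zero-indexed: children of index i are
// at i*2+1 and i*2+2, and the parent of i is (i-1)/2.
class MinHeap {
    private int[] data = new int[64];   // fixed capacity for brevity
    private int size = 0;               // index of the next "last" slot

    void insert(int value) {
        data[size] = value;             // step 1: keep the heap complete
        int i = size++;
        while (i > 0 && data[i] < data[(i - 1) / 2]) {   // bubble up
            swap(i, (i - 1) / 2);
            i = (i - 1) / 2;
        }
    }

    int removeMin() {
        int min = data[0];              // the root is always the minimum
        data[0] = data[--size];         // move the "last" value to the root
        int i = 0;
        while (true) {                  // trickle down via the lesser child
            int l = i * 2 + 1, r = i * 2 + 2, smallest = i;
            if (l < size && data[l] < data[smallest]) smallest = l;
            if (r < size && data[r] < data[smallest]) smallest = r;
            if (smallest == i) break;
            swap(i, smallest);
            i = smallest;
        }
        return min;
    }

    private void swap(int a, int b) {
        int t = data[a]; data[a] = data[b]; data[b] = t;
    }
}
```

Note that the only structural moves are at the "last" slot; everything else is value swapping, so the heap stays complete the whole time.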

So we have already established (last week) that BSTs are used for searching, and heaps are used for sorting. What about a data structure that's good for retrieval (finding a specific value in an optimal runtime)? It's true that we can use a BST to find a specific value and retrieve it, but the runtime for finding a value in the BST is O(log_{2}n). We are told there is a structure that does not have as much internal organization as the BST, and also not as much overhead (nodes and child pointers), AND it can perform a retrieval in a runtime that APPROACHES O(1). Note that it is **NOT GUARANTEED to be O(1)**, we just know that it's probably less than O(n), and **hopefully close to O(1)**—which is achieved when there are *minimal collisions*. If it were O(log_{2}n), then a hash table would be no better than a BST for retrieval.

This structure is the hash table, or hash map. A hash table can be represented as an array, and each data element will be "mapped" to a certain index in the array. We need to establish a "rule" to determine which index each data element should have, and this will be based on the *value of the element*. An example would be the colors in a box of crayons. Assuming there are a lot of different color names, and that they mostly all start with different letters of the alphabet, you could arrange them in a hash table with the following rule: the first letter of the color name gets translated into a number (A=1, Z=26). Therefore, you could retrieve a color by using this "rule", so if you wanted to find Magenta, you would go straight to index #13. This appears to be a single step, so why isn't the runtime always O(1) to retrieve an element? Well, what if we also had the colors Mauve, Maroon, and Midnight Blue? Your hash table might have more than one value mapped to the same location, and that will cause a "collision". We will learn how to deal with collisions next week.

The "rule" that we use to map a value to the hash table is known as the "hash code" or "hash function". If we are trying to map the set of all positive integers to an array of size 10, we must use a hash code similar to this: H(d) = d%10. "d" represents the element we need the hash code for. So if you want to get the hash value of 101, or H(101), you must solve the equation 101%10.

% = "modulo", or the remainder operator. In integer division, the remainder is never kept, so the result of 101/10 would be 10. Modulo is used to get that remainder, so 101%10 = 1, 100%10 = 0, and 10%10 = 0. Also, 9%10 = 9 and 98%10 = 8, etc.

The modulo operator is exceedingly useful for hashing, because if you use d%10 for any positive integer, your result MUST be between 0 and 9. You can never have a remainder of 10 if you are dividing by 10. This hash code works perfectly with our array of size 10 because its indices are the values 0-9.
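As a minimal sketch (the class name is made up for illustration), the whole hash-and-bound step is one line of Java:

```java
// H(d) = d % tableSize, which always lands in the range 0..tableSize-1,
// so the result is a valid array index.
class SimpleHash {
    static int hash(int d, int tableSize) {
        return d % tableSize;
    }
}
```

For example, hash(101, 10) gives 1, so the value 101 would be stored at index 1 of a size-10 table.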

.

Topics covered in this log:

- **Hash codes** - know that **%n**, where n is the size of the hash table, is an implied step after using the hash function
- **Retrieval from a hash table** - just follow the same steps you used to insert: generate the hash code, and if the value is not in this spot, use linear probing (there was a collision during insertion)
- **Linear probing** vs. **quadratic probing** - what's the difference, and which one is better?
- **Open addressing** (array) vs. **chaining** (array of linked lists) - how are collisions dealt with in each case?
- **Game trees** - representing possible moves in a tree
- **Min-max strategy** - starting with a row of values (bottom row of a tree = leaves), build a tree above it with the parent inheriting the min value, then its parent the max value, etc. Know how to analyze the game tree once all the rows are created.
- **Alpha-Beta pruning will be on the quiz**, and I'll attempt to explain it in these logs.

This week we continued to discuss **hash codes**. Remember that after you use a hash function to generate the hash code, you still have to perform the **modulo** operation (%n) on this value to ensure that it will be in the bounds of our hash table. The modulo operation is never included in the hash function itself because the size of the hash table, n, is the user's choice. Hash table size will vary based on how much memory you want to take up, and how many data elements you want to insert, but a specific hash function is usually predetermined based on the data set.

We know that the reason for using a hash table for retrieval as opposed to a BST is that the best-case performance of hash tables, O(1), is *better* (less time) than that of the BST, O(log_{2}n). However, the *worst* case for hash tables is O(n), which is worse than O(log_{2}n)—we will now discuss why there is such a large range for the hash table's performance.

**Retrieval** is accomplished by generating the hash code for the data element you are trying to retrieve, and then looking into the cell with this index (just like during an insert). But then we must check if the element is actually here or not. If there was a collision during the insert of this item, there would be a different element in this cell, and the one we're looking for would be in one of the next cells (if it's even in the hash table at all). Keep in mind that the element you are searching for might not even be in the hash table, so if you encounter an empty space before you find the element, you must return "null". Therefore, it is good to have empty cells interspersed throughout your occupied cells so that a retrieval will not take too long (*clustering* of data implies a large group of full cells, which is unfavorable).

So getting back to the runtimes of O(1) vs. O(n) for the hash table—if all n data elements had been mapped to the same index, the worst case for retrieving one of them would be O(n) because you would have to search through all n elements until you found the correct one. If there were no collisions at all, then each element would be exactly where its hash code maps to, meaning all retrievals would be O(1). Therefore, you should always strive for a hash function that will *spread out the data elements with as few collisions as possible*, as well as a collision method that will reduce clustering.

There are different ways to deal with collisions when inserting data (always use the same collision method when retrieving the element later). The most straightforward method is to use **linear probing**, which means that after a collision you keep adding 1 to the index until you find a new index that has not been filled already. This method will lead to clustering of data if there are many elements with the same hash code. To reduce clustering, you may alternately use **quadratic probing**, which is where you visit the positions after the index that have the values i+1, i+4, i+9, i+16, etc, as opposed to linear probing, i+1, i+2, i+3, etc. By using quadratic probing, we hope to space out the elements better (more open cells), so that each retrieval step won't take as long. These two types of collision resolution can be applied to a hash table that is represented as an array, which is known as **open addressing** (we couldn't exactly determine why it's called that, but just remember that it means using an array and probing after a collision). The alternative to open addressing would be **chaining**, where the hash table is represented as an array of linked lists. During a collision, instead of using probing to find an empty index to insert into, you add the element to the top of the linked list for the given index. Below is an image showing a chained hash table. Each array cell contains a pointer to its linked list.

In the event of a collision, a new element would be inserted at the head of the list, because this requires O(1) steps every time. We could have a pointer to the end of each list and insert at the end in O(1) time, but this just wastes space by having these unnecessary pointers, since we can achieve the same result by inserting at the head. Retrieval is still NOT GUARANTEED to take O(1) time, for you may be searching for an element that is at the end of a linked list.
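Going back to open addressing, here is a minimal linear-probing sketch (a hypothetical class, assuming the table never fills up; quadratic probing would step to offsets 1, 4, 9, ... instead of 1, 2, 3, ...):

```java
// Open-addressing hash table with linear probing. Insert and retrieval
// follow the same probe sequence, and an empty cell means "not here".
class ProbingTable {
    private Integer[] cells = new Integer[10];   // null = empty cell

    void insert(int value) {
        int i = value % cells.length;            // hash code, then %n
        while (cells[i] != null) {               // collision: probe forward by 1
            i = (i + 1) % cells.length;          // wrap around at the end
        }
        cells[i] = value;
    }

    boolean contains(int value) {
        int i = value % cells.length;
        while (cells[i] != null) {               // stop at the first empty cell
            if (cells[i] == value) return true;
            i = (i + 1) % cells.length;
        }
        return false;
    }
}
```

For example, inserting 11 and then 21 (both hash to index 1) puts 21 in index 2, and a later retrieval of 21 follows the same probe path to find it.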

On Thursday, we switched gears to **game programming**. We started off with a slide show titled the "History of Computer Games", which can be found on the CIS 350 class website. This course is titled "Introduction to Game Programming"—it's a fun undergraduate course which may be offered next year, keep an eye out! There were some interesting games and systems in this slideshow, including the now-famous (in CIS2168) "Mission Elevator", which was authored by none other than Dr. Lakaemper!

Next we approached the problem of modeling the game of chess on the computer. There would be an array ([8][8], or just [64]) storing all the spaces on the game board, and a representation of each piece (possibly a class for each type) for both teams. The GUI modeling aside, the question becomes: how do we get the computer to play competitively and "think" like a person? Well, we're not going to have the computer think exactly like a person. The computer can perform trillions of operations per second, meaning that it can use logic, probability, and countless "test" cases to analyze the situation and take the best probable move. **Game trees** are used for this purpose. Assume that each game board configuration has a computed "value" symbolizing how "good" it is for a player. Let's say that player 1 is represented by maximum values (larger = better for him), and player 2 is represented by minimum values (smaller = better for him). Examine the game tree below. To create it, the computer calculated each possible move from the current position (1st move), each possible 2nd move from all of the possible 1st moves, and so on, up to all the possible 4th moves. This tree could have a HUGE span, for each position might have 35 possible moves spawning from it, meaning there would be over 1.5 million possible 4th moves. This is a lot, but a modern computer wouldn't have much trouble dealing with it. For the sake of simplicity in this class, though, we will assume 2 possible moves from each position. Now we'll pretend that player 1, or "Max" (wants a higher value), is the computer. He (the computer) calculated up to 4 moves ahead and stored the value for each of the 8 possible final configurations (leaf nodes). Using the "min-max" strategy, we assign the levels alternating between "min" and "max", and once we predict each person's move, the root node will represent the move that Max will choose to have the best chance for success.

So starting with the leaves, we will choose the parent node based on whoever represents that row. So the 3rd row belongs to Max, and if there are two possible moves spawning from his row, we know that he would choose the move with the maximum value. This is how we fill in the tree from the BOTTOM-UP. The tree on the right has been filled using this strategy.

The point of this tree is to find a move that is good for Max 4 moves from now. Out of all the possible 4th moves, 15 would be the best. But Min would never choose to make a move of value 15 when he could also make a move of 7, because 7 is better for him. But at least Max knows that his move is pretty good considering the fact that Min will always pick the move with the least value out of the possibilities.

Because this tree can grow to be so huge, there is a technique called **Alpha-Beta pruning** that will help eliminate unnecessary computations by removing subtrees that would never be chosen. Pretend that the -1 was actually a 7 (or any number greater than 6). When moving from left-to-right, we see that Max picked 6 in level 3. Then to fill the space to the right of the 6 (the 3 in the 3rd row), Max looks below at the left child first (pretend -1 is 7). So before looking at the next node (the bottom-row 3), Max knows that it already has a 7 contending for this parent spot, so it would have to pick the 7 unless there is something greater than a 7 to the right of it. So this parent spot must have a 7 or greater. But the Min row above this Max row is looking to pick the minimum value out of the two possible children. There's a 6 on the left, and "7 or greater" on the right, so we can STOP right there, for we know that Min will pick the 6. Therefore, we don't have to compute any other children of that 3 in the 3rd row. So we can eliminate everything to the right of the 7 (the -1 in the picture), and even though this is only one node in our picture, in reality there would be many more options spawning out of each node, so it would actually save you a lot of time. Check out the slide set titled "8. Classic AI: Search Trees" on the 350 class website for more info about the Game Tree.
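The min-max strategy with alpha-beta pruning can be sketched in a few lines of Java. This toy version (my own illustration, not from the slides) represents the game tree as nested arrays, with leaves as plain integers:

```java
// Min-max with alpha-beta pruning over a toy game tree.
// A node is either an Integer leaf (a board value) or an Object[]
// holding its child nodes.
class AlphaBeta {
    static int minimax(Object node, int alpha, int beta, boolean maxTurn) {
        if (node instanceof Integer) return (Integer) node;   // leaf value
        Object[] children = (Object[]) node;
        int best = maxTurn ? Integer.MIN_VALUE : Integer.MAX_VALUE;
        for (Object child : children) {
            int v = minimax(child, alpha, beta, !maxTurn);
            if (maxTurn) {
                best = Math.max(best, v);
                alpha = Math.max(alpha, v);   // best Max can guarantee so far
            } else {
                best = Math.min(best, v);
                beta = Math.min(beta, v);     // best Min can guarantee so far
            }
            if (beta <= alpha) break;         // prune: the other player
        }                                     // would never let us get here
        return best;
    }
}
```

For a two-level tree where Max chooses between min(6, 5) and min(7, 3), the root value comes out to 5, since Min would pick 5 on the left and 3 on the right.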

.

Topics covered in this log:

- Why is knowing the number of comparisons and swaps in a sorting algorithm important (and which of those two operations is more expensive/has more lines of code)?
- What is the runtime of heapsort (sorting using a heap)?
- What is the runtime of most sorts (bubble sort, insertion sort) that have two loops?
- What is the runtime of merge sort and why?
- What is the point of each *for* loop in bubble sort?

We started this week's class with a guest professor, Dr. LaFollette. He asked us which sorting algorithms we already know, and the answers included *heapsort, bubble sort, insertion sort*, and *selection sort*. The only sort that we actually learned in this class is heapsort, which is performed by constructing a heap (inserting a set of numbers), and then destroying the heap (deleting the root node repeatedly until the heap is empty). As we already learned at the end of week 9, insertion and deletion for a heap both have runtime O(log_{2}n). So constructing a heap (inserting n elements) has a runtime of n * O(log_{2}n) = O(nlog_{2}n). Destroying a heap has the same runtime, so the combined construction and destruction of a heap (resulting in a sorted list!) would have a runtime of O(nlog_{2}n) + O(nlog_{2}n) = O(2nlog_{2}n) --> remove the constant "2" --> **O(nlog_{2}n)** for heapsort.
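Heapsort can be sketched using Java's built-in min-heap, java.util.PriorityQueue (a library shortcut, not how we built heaps in class): n inserts followed by n root removals, each costing O(log_{2}n).

```java
import java.util.PriorityQueue;

// Heapsort via the library min-heap: build the heap, then repeatedly
// remove the root (the minimum) to produce an ascending list.
class HeapSortDemo {
    static int[] heapSort(int[] input) {
        PriorityQueue<Integer> heap = new PriorityQueue<>();
        for (int v : input) heap.add(v);          // construct the heap
        int[] sorted = new int[input.length];
        for (int i = 0; i < sorted.length; i++) {
            sorted[i] = heap.remove();            // destroy the heap root by root
        }
        return sorted;
    }
}
```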

Heapsort is different than the other sorting algorithms mentioned (bubble, insertion, selection) because in heapsort the elements are structured in a unique tree structure (a heap). Those other 3 sorting algorithms keep the data in an array, making comparisons and swapping elements when necessary. You should check out this sorting applet that simulates 5 of the different sorts (I get a blinking warning sign when I run this applet using Java 6. If anyone can figure out how to determine what the warning actually is, let me know). By looking at the animation in the applet and the pseudocode below, try to determine what is actually going on in each sort. Then, see if you can determine the big-Oh performance for each of these algorithms. Note that in the pseudocode below, ">=" means "greater than or equal to".

As you can see, these algorithms involve loops. Each *for* loop depends on n, and the *while* loop in insertion sort also depends on n (even though it has a second stop condition "A[j] > value"). Since each sort has two (nested) loops that depend on n, the runtime is O(n^{2}) for each of them. Note that the runtime wouldn't be O(n^{2}) if the loops were not nested—if they were sequential, then it would only be O(n) + O(n) = O(2n) --> O(n).

The applet keeps track of the number of comparisons and copies (swaps) that have occurred during each sort. These two operations are key to determining which sort you want to use. We noted that swapping is a more "expensive" operation than a comparison, because it involves three lines of code rather than one, and more data is being moved. Swap is represented by the following pseudocode:

swap(a, b):
    temp = a
    a = b
    b = temp

In each sort, the number of comparisons is determined by the algorithm. However, the number of swaps is determined by the data. For instance, if a list is almost sorted, then insertion sort would perform the quickest because it has the least number of comparisons (in the *while* loop header, it checks the condition: A[j] > value). So the *while* loop would almost never execute if the list was almost sorted because this condition would almost always be false. As seen in class, we could also modify bubble sort to be more efficient for an almost-sorted list. If there have been no swaps after one iteration of the outer *for* loop, we can stop the algorithm because we know the list is sorted. Inserting a boolean variable to act as a "flag", we could set it to "false" before the inner *for* loop, and set it to "true" only if there was a swap. Then after the inner *for* loop, you could check if the flag is still false, and if so, break from the outer *for* loop with the "break;" statement, thus reaching the end of the algorithm. Note that the loop indices in the bubble sort example above are reversed from the example we had in class, but it doesn't affect the result. The outer loop is still used to control the scope of the comparisons, and the inner loop iterates through each comparison within the determined scope or range. Any elements outside the current scope have already been placed into their correct positions.
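The bubble sort "flag" optimization described above can be sketched in Java (illustrative code; the loop direction matches the scope-shrinking version, not necessarily the class example):

```java
// Bubble sort with an early exit: if a full inner pass makes no swaps,
// the array is already sorted and we can stop.
class BubbleSort {
    static void sort(int[] a) {
        for (int end = a.length - 1; end > 0; end--) {   // shrink the scope
            boolean swapped = false;                     // the "flag"
            for (int j = 0; j < end; j++) {              // compare within scope
                if (a[j] > a[j + 1]) {
                    int t = a[j];                        // swap: three lines,
                    a[j] = a[j + 1];                     // more expensive than
                    a[j + 1] = t;                        // one comparison
                    swapped = true;
                }
            }
            if (!swapped) break;   // no swaps this pass: already sorted
        }
    }
}
```

On an almost-sorted array the flag triggers after one or two passes, which is why the data, not just the algorithm, determines the actual cost.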

Merge sort differs from the sorts we have already looked at because it is a recursive algorithm. The animation in the applet shows the steps performed after the function has reached the final recursive level, where the lists are of the smallest size. The pseudocode below shows all of the recursive steps involved. Note that in the pseudocode, "<=" means "less than or equal to", and "var list: left" means "create a new variable of type list, named left". The merge sort algorithm involves the usage of a "helper" function that acts to merge two sorted lists together, since the point of merge sort is to break down the list into smaller lists and then combine them in the correct sorted way.

So what is the performance of merge sort? Well we must investigate how many steps are involved. We need to look for loops to help us determine runtime, and we should be able to spot 3 loops: the two *for* loops that split a list into left and right halves, and the *while* loop that merges two lists together. Because the splitting requires almost no logic, we will disregard these two loops. Then we are left with one *while* loop that depends on the length of a list (which will range from 2 to n depending on which recursive step we're at), so we will assign the worst case "n" to this loop. So where is another loop in this algorithm? It may not seem like a loop at first, but recursion is a form of looping. It requires that the same procedures are repeated a certain amount of times. So how many recursions will this algorithm need to complete? We must look at the stopcase to determine that. The algorithm will only stop when the length of an input list is 1. So how many times must you split a list of length "n" in half until each piece is size 1? That can easily be represented by a binary tree, where each child node is half of its parent.

After doing some calculations, we determined that there are log(n) recursions. So when we multiply log(n) with the *while* loop's runtime n, we get O(nlogn) for merge sort. It's ok if the calculations involved don't make total sense to you, because you will learn how to do algorithm evaluations such as this in CIS 3223. So for now, know that the runtime is O(nlogn) and that the recursive nature of the algorithm is directly related to this fact.
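A self-contained Java sketch of merge sort, following the same structure as the pseudocode (split in half, recurse, then merge with a "helper"):

```java
import java.util.Arrays;

// Recursive merge sort: log(n) levels of splitting, and an O(n) merge
// loop per level, giving O(n log n) overall.
class MergeSort {
    static int[] sort(int[] a) {
        if (a.length <= 1) return a;                       // stopcase
        int mid = a.length / 2;
        int[] left = sort(Arrays.copyOfRange(a, 0, mid));  // sort each half
        int[] right = sort(Arrays.copyOfRange(a, mid, a.length));
        return merge(left, right);
    }

    // The "helper": merges two already-sorted arrays into one.
    static int[] merge(int[] l, int[] r) {
        int[] out = new int[l.length + r.length];
        int i = 0, j = 0, k = 0;
        while (i < l.length && j < r.length) {             // the O(n) loop
            out[k++] = (l[i] <= r[j]) ? l[i++] : r[j++];
        }
        while (i < l.length) out[k++] = l[i++];            // drain leftovers
        while (j < r.length) out[k++] = r[j++];
        return out;
    }
}
```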

.

Tuesday we introduced the group project and final lab assignment, lab 8.

Thursday was Thanksgiving so we had no class.

.

This was our last full week of class. We covered AVL trees, which are self-balancing trees. The structure is named after the Russian mathematicians Adelson-Velsky and Landis, who wrote the paper introducing this structure in 1962.

Binary search trees are said to be "balanced" if the height of the left subtree differs from the height of the right subtree by a maximum of 1. We calculate the "balance factor" or value of balance with the formula: height(right subtree) - height(left subtree), so a "balanced" tree could have a balance factor of -1, 0, or 1.

When we balance a tree, we are choosing a new root so that the levels on each side are shifted. If a tree is left-heavy, then we would want to use one of the nodes on the left side as the new root. Study the diagram below (from Wikipedia) to see how the new root, or "pivot", is found. It is called a "pivot" because it is the node you are rotating around. As you can see, there are 4 initial cases. The name "Left Left" means that the entire tree is left-heavy, and the left subtree is also left-heavy (or possibly balanced). The reason the "Left Right" and "Right Left" trees have an extra step is that the tree would still be unbalanced, just in the opposite direction, if the normal pivot procedure were applied to them. Therefore we must transform them into an LL or RR tree before doing the normal pivot procedure.

Take note of the relocation of the "B" subtree in the LL and RR cases, and the "D" subtree in the LR and RL cases.

The top row is the initial tree, and the bottom row shows the final result (an intermediate step is shown for the LR and RL trees).

AVL trees are constructed by constantly checking the "balance factor" after each insert into the tree, and performing a balancing operation if necessary. This ensures that the tree is always balanced, which will prove useful in keeping the search performance at a runtime of O(logn). Tree height always determines the search performance, for the height represents the maximum amount of nodes you would have to search through.
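As one concrete example, the LL case's rotation can be sketched in Java (illustrative names, not the book's code). Note how the pivot's old right subtree (the "B" subtree in the diagram) is re-attached on the other side:

```java
// Right rotation for the Left-Left case: the left child becomes the new
// root ("pivot") of the subtree, and its old right subtree is relocated.
class AvlRotation {
    static class Node {
        int value;
        Node left, right;
        Node(int value) { this.value = value; }
    }

    static Node rotateRight(Node root) {
        Node pivot = root.left;    // pivot: the node we rotate around
        root.left = pivot.right;   // relocate the "B" subtree
        pivot.right = root;        // old root drops to the right side
        return pivot;              // pivot is the new subtree root
    }
}
```

Rotating the left-heavy chain 3 -> 2 -> 1 this way yields 2 as the new root with 1 and 3 as its children, restoring a balance factor of 0.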

.

Today was the last day of class, and our review for the final.

Topics that will be covered on the final exam:

1) Inheritance

2) Up/Down-casting

3) Static vs. non-static (class fields vs. instance variables)

4) Interfaces

5) Lists (single/double, circular)

6) Big O

7) Stacks

8) Binary Trees

9) Expression Trees

10) Reverse Polish Notation (postfix, prefix)

11) Tree Traversal (postorder, preorder, inorder)

12) Binary Search Trees

13) Heaps

14) Hashing

15) Sorting

16) Recursion

The exam is open notes and open book. However, you may not use your computer (since we have no way to restrict online communication).

Unless you are informed otherwise, the exam will be held in the same room as our class: Tuttleman 01B.

.