Thursday, April 26, 2012

Testing the Scorer of a Clojurebreaker Game


In the previous section, we developed the score function iteratively at the REPL and saw it work correctly with a few example inputs. It doesn’t take much commitment to quality to want to do more validation than that! Let’s begin by teasing apart some of the things that people mean when they say “testing.”
Testing includes the following:
·        Thinking through whether the code is correct
·        Stepping through the code in a development environment where you can see everything that is happening
·        Crafting inputs to cover the various code paths
·        Crafting outputs to match the crafted inputs
·        Running the code with various inputs
·        Validating the results for correctness
·        Automating the validation of results
·        Organizing tests so that they can be automatically run for regression purposes in the future
This is hardly an exhaustive list, but it suffices to make the point of this section. In short, testing is often complex, but it can be simple. Traditional unit-testing approaches complect many of the testing tasks listed earlier. For example, input, output, execution, and validation tend to be woven together inside individual test methods. On the other hand, the minimal REPL testing we did before simply isn’t enough. Can we get the benefit of some of the previous ideas of testing, without the complexity of unit testing? Let’s try.
 
Crafting Inputs
We have already seen the score function work for a few handcrafted inputs. How many inputs do we need to convince ourselves the function is correct? In a perfect world, we would just test all the inputs, but that is almost always computationally infeasible. But we are lucky in that the problem of scoring the game is essentially the same for variants with a different number of colors or a different number of pegs. Given that, we actually can generate all possible inputs for a small version of the game.
The branch of math that deals with the different ways of forming patterns is called enumerative combinatorics. It turns out that the Clojure library math.combinatorics has the functions we need to generate all possible inputs.
Add the following form under the :dependencies key in your project.clj file, if it is not already present:

[org.clojure/math.combinatorics "0.0.1"]
The selections function takes two arguments (a collection and a size), and it returns every structure of that size made up of elements from that collection.
Try it for a tiny version of Clojurebreaker with only three colors and two columns:

(require '[clojure.math.combinatorics :as comb])
(comb/selections [:r :g :b] 2)
-> ((:r :r) (:r :g) (:r :b)
(:g :r) (:g :g) (:g :b)
(:b :r) (:b :g) (:b :b))
So, selections can give us a possible secret or a possible guess. What about generating inputs to the score function? Well, that is just selecting two selections from the selections:
 
(-> (comb/selections [:r :g :b] 2)
(comb/selections 2))
-> (81 pairs of game positions omitted for brevity)
Let’s put that into a named function:

clojurebreaker/src/clojurebreaker/game.clj
(defn generate-turn-inputs
  "Generate all possible turn inputs for a clojurebreaker game
  with colors and n columns"
  [colors n]
  (-> (comb/selections colors n)
      (comb/selections 2)))
All right, inputs generated. We are going to skip thinking about outputs (for reasons that will become obvious in a moment) and turn our attention to running the scorer with our generated inputs.
 
Running a Test
We are going to write a function that takes a sequence of turn inputs and reports, for each one, the input and the result of calling score. We don’t want to commit (yet) to how the results of this test run will be validated. Maybe a human will read it. Maybe a validator program will process the results. Either way, a good representation of each result might be a map with the keys :secret, :guess, and :score.
All this function needs to do is call score and build the collection of responses:

clojurebreaker/src/clojurebreaker/game.clj
(defn score-inputs
  "Given a sequence of turn inputs, return a lazy sequence of
  maps with :secret, :guess, and :score."
  [inputs]
  (map
   (fn [[secret guess]]
     {:secret (seq secret)
      :guess (seq guess)
      :score (score secret guess)})
   inputs))
Try it at the REPL:

(->> (generate-turn-inputs [:r :g :b] 2)
(score-inputs))
-> ({:secret (:r :r), :guess (:r :r),
:score {:exact 2, :unordered 0}}
{:secret (:r :r), :guess (:r :g),
:score {:exact 1, :unordered 0}}
;; remainder omitted for brevity
If a human is going to be reading the test report, you might decide to format a text table instead, using clojure.pprint’s print-table. While we are at it, let’s generate a bigger game (four colors by four columns) and print the table to a file:

(use 'clojure.pprint)
(require '[clojure.java.io :as io])
(with-open [w (io/writer "scoring-table")]
(binding [*out* w]
(print-table (->> (generate-turn-inputs [:r :g :b :y] 4)
(score-inputs)))))
-> nil
If you look at the scoring-table file, you should see 65,536 different secret/guess combinations and their scores.
 
Validating Outputs
At this point, it is obvious why we skipped crafting the outputs. The program has done that for us. We just have to decide how much effort to spend validating them. Here are some approaches we might take:
·        Have a human code-breaker expert read the entire output table for a small variant of the game. This has the advantage of being exhaustive but might miss logic errors that show up only in a larger game.
·        Pick a selection of results at random from a larger game and have a human expert verify that sample (see the sketch below).
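For the random-sample approach, a minimal sketch might look like the following at the REPL (sample-results is our own helper name, not part of the game code):

(defn sample-results
  "Pick n scored results at random from the full set of turn inputs."
  [n inputs]
  (->> (score-inputs inputs)
       shuffle
       (take n)))

(sample-results 3 (generate-turn-inputs [:r :g :b :y] 4))
-> (3 randomly chosen secret/guess/score maps omitted for brevity)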
 
Because the validation step is separated from generating inputs and running the program, we can design and write the various steps independently, possibly at separate times. Moreover, the validator knows nothing about how the inputs were generated. With unit tests, the inputs and outputs come from the same programmer’s brain at about the same time. If that programmer is systematically mistaken about something, the tests simply encode mistakes as truth. This is not possible when the outputs to validate are chosen exhaustively or randomly.
We will return to programmatic validation later, but first let’s turn to regression testing.
 
Regression Testing
How would you like to have a regression suite that is more thorough than the validation effort you have made? No problem.
·        Write a program whose results should not change.
·        Run the program once, saving the results to a (well-named!) file.
·        Run the program again every time the program changes, comparing with the saved file. If anything is different, the program is broken.
The nice thing about this regression approach is that it works even if you never did any validation of results. Of course, you should still do validation, because it will help you narrow down where a problem happened. (With no validation, the regression error might just be telling you that the old code was broken and the new code fixed it.)
How hard is it to write a program that should produce exactly the same output?
Call only pure functions from the program, which is exactly what our score-inputs function does.
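Here is a minimal sketch of what that regression check might look like, assuming the scoring-table file saved earlier is the baseline (the helper names are ours, not part of the game code):

(defn scoring-table-output
  "Regenerate the full scoring table as a string, exactly as it was
  written to the scoring-table file."
  []
  (with-out-str
    (print-table (->> (generate-turn-inputs [:r :g :b :y] 4)
                      (score-inputs)))))

(defn regression-ok?
  "Compare the freshly generated table against the saved baseline file."
  [baseline-file]
  (= (slurp baseline-file) (scoring-table-output)))

(regression-ok? "scoring-table")
-> true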
Wiring this kind of regression test into a continuous integration build is not difficult. If you do it, think about contributing it to whatever testing framework you use.
Now we have partially answered the question, “How do I make sure that my code is correct?” In summary:
·        Build with small, composable pieces (most should be pure functions).
·        Test forms from the inside out at the REPL.
·        When writing test code, keep input generation, execution, and output validation as separate steps.
This last idea is so important that it deserves some library support. So, before we move on, we are going to introduce test.generative, a library that aspires to bring simplicity to testing.
 

Scoring a Clojurebreaker Game


As a Clojure programmer, one question you will often ask is, “Where do I need state to solve this problem?” Or, better yet, “How much of this problem can I solve without using any state?” With Clojurebreaker (and with many other games), the game logic itself is a pure function. It takes a secret and a guess and returns a score. Identifying this fact early gives us two related advantages:
·        The score function will be trivial to write and test in isolation.
·        We can comfortably proceed to implement score without even thinking about how the rest of the system will work.
Scoring itself divides into two parts: tallying the exact matches and tallying the matches that are out of order. Each of these parts can be its own function. Let’s start with the exact matches. To make things concrete, we will pick a representation for the pegs that facilitates trying things at the REPL: the four colors :r (red), :g (green), :b (blue), and :y (yellow). The function will return the count of exact matches, which we can turn into black pegs in a separate step later. Here is the shell of the function we think we need:

clojurebreaker/snippets.clj
(defn exact-matches
  "Given two collections, return the number of positions where
  the collections contain equal items."
  [c1 c2])
Hold the phone—that doc string doesn’t say anything about games or colors or keywords. What is going on here? While some callers (e.g., the game) will eventually care about the representation of the game state, exact-matches doesn’t need to care. So, let’s keep it generic. A key component of responsible Clojure design is to think in data, rather than pouring object concrete at every opportunity.
When described as a generic function of data, exact-matches sounds like a function that might already exist. After searching through the relevant namespaces (clojure.core and clojure.data), we discover that the closest thing to exact-matches is clojure.data’s diff. diff recursively compares two data structures, returning a three-tuple of things-in-a, things-in-b, and things-in-both. The things-in-both is nothing other than the exact matches we are looking for.
Try it at the REPL:

(require '[clojure.data :as data])
(data/diff [:r :g :g :b] [:r :y :y :b])
-> [[nil :g :g] [nil :y :y] [:r nil nil :b]]
The non-nil entries in [:r nil nil :b] are the exact matches when comparing r/g/g/b and r/y/y/b. With diff in hand, the implementation of exact-matches is trivial:

clojurebreaker/src/clojurebreaker/game.clj
(defn exact-matches
  "Given two collections, return the number of positions where
  the collections contain equal items."
  [c1 c2]
  (let [[_ _ matches] (data/diff c1 c2)]
    (count (remove nil? matches))))
Again, we test at the REPL against an example input:

(exact-matches [:r :g :g :b] [:r :y :y :b])
-> 2
Now let’s turn our attention to the unordered matches. To calculate these, we need to know how many of each colored peg are in the secret and in the guess. This sounds like a job for the frequencies function:

(def example-secret [:r :g :g :b])
(frequencies example-secret)
-> {:r 1, :g 2, :b 1}
(def example-guess [:y :y :y :g])
(frequencies example-guess)
-> {:y 3, :g 1}
To turn those two frequencies into the unordered-matches, we need to do two additional things:
·        Consider only the keys that are present in both the secret and the guess
·        Count only the overlap (i.e., the minimum of the vals under each key)
Again, we hope these operations already exist, and happily they do. You can keep the keys you need with select-keys:

(select-keys (frequencies example-secret) example-guess)
-> {:g 2}
(select-keys (frequencies example-guess) example-secret)
-> {:g 1}
And you can count the overlap between two frequency maps using merge-with:

(merge-with min {:g 1} {:g 2})
-> {:g 1}
Combining frequencies and select-keys and merge-with gives the following definition for unordered-matches:

clojurebreaker/src/clojurebreaker/game.clj
(defn unordered-matches
  "Given two collections, return a map where each key is an item
  in both collections, and each value is the number of times the
  value occurs in the collection with fewest occurrences."
  [c1 c2]
  (let [f1 (select-keys (frequencies c1) c2)
        f2 (select-keys (frequencies c2) c1)]
    (merge-with min f1 f2)))
which, of course, we should verify at the REPL:
(unordered-matches [:r :g :g :b] [:y :y :y :g])
-> {:g 1}
That’s nice, with one subtlety. unordered-matches counts matches regardless of order, while the game will want to know only the matches that are not in the right order. Even though the game doesn’t seem to need unordered-matches, writing it was a win because of the following:
·        unordered-matches does exactly one thing. To write a not-ordered-matches function directly, we would have to reimplement exact-matches inside it.
·        The two simple functions we just wrote are exactly the functions we need to compose together to get the not-ordered semantics. Just subtract the results of exact-matches from the results of unordered-matches.
With the two primitives in place, the score operation simply compounds them:

clojurebreaker/src/clojurebreaker/game.clj
(defn score
  [c1 c2]
  (let [exact (exact-matches c1 c2)
        unordered (apply + (vals (unordered-matches c1 c2)))]
    {:exact exact :unordered (- unordered exact)}))
And the REPL rejoices:
(score [:r :g :g :b] [:r :y :y :g])
-> {:exact 1, :unordered 1}
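Duplicate colors are the classic tricky case in this kind of scoring, so one more hand-checked example (ours, not part of the original walkthrough) is worth a moment at the REPL:

(score [:r :g :g :b] [:g :g :r :r])
-> {:exact 1, :unordered 2}

Only the :g in the second position lines up exactly; of the remaining pegs, one :g and one :r match out of position.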
At this point, we have demonstrated a partial answer to the question, “What is a good workflow for writing code?” In summary:

·        Break apart the problem to identify pure functions.
·        Learn the standard library so you can find functions already written.
·        Pour no concrete (use data as data).
·        Test inside out from the REPL.
In our experience, programmers trying this workflow for the first time make two typical mistakes:
·        Coding too much
·        Complicating the tests
You have written too much code whenever you don’t understand the behavior of a form, but you haven’t yet tested and understood all of its subforms. Many developers have an intuition of “write X lines of code and then test,” where X is the smallest number of lines that can do something substantial. In Clojure, X is significantly smaller than one, which is why we emphasize building functions inside out at the REPL.
“Complicating the tests” is more subtle, and we will take it up in the next section.

Programming Clojure - Living Without Multimethods


 The best way to appreciate multimethods is to spend a few minutes living without them, so let’s do that. Clojure can already print anything with print/println. But pretend for a moment that these functions do not exist and that you need to build a generic print mechanism. To get started, create a my-print function that can print a string to the standard output stream *out*:
src/examples/life_without_multi.clj
(defn my-print [ob]
(.write *out* ob))
Next, create a my-println that simply calls my-print and then adds a line feed:

src/examples/life_without_multi.clj
(defn my-println [ob]
(my-print ob)
(.write *out* "\n"))
The line feed makes my-println’s output easier to read when testing at the REPL. For the remainder of this section, you will make changes to my-print and test them by calling my-println. Test that my-println works with strings: 
(my-println "hello")
| hello
-> nil
That is nice, but my-println does not work quite so well with nonstrings such as nil:

(my-println nil)
-> java.lang.NullPointerException
That’s not a big deal, though. Just use cond to add special-case handling for nil:

src/examples/life_without_multi.clj
(defn my-print [ob]
  (cond
    (nil? ob) (.write *out* "nil")
    (string? ob) (.write *out* ob)))
With the conditional in place, you can print nil with no trouble:
(my-println nil)
| nil
-> nil
Of course, there are still all kinds of types that my-println cannot deal with. If you try to print a vector, neither of the cond clauses will match, and the program will print nothing at all:

(my-println [1 2 3])
-> nil
By now you know the drill. Just add another cond clause for the vector case.
The implementation here is a little more complex, so you might want to separate the actual printing into a helper function, such as my-print-vector:

src/examples/life_without_multi.clj
(require '[clojure.string :as str])
(defn my-print-vector [ob]
  (.write *out* "[")
  (.write *out* (str/join " " ob))
  (.write *out* "]"))
(defn my-print [ob]
  (cond
    (vector? ob) (my-print-vector ob)
    (nil? ob) (.write *out* "nil")
    (string? ob) (.write *out* ob)))
Make sure that you can now print a vector:

(my-println [1 2 3])
| [1 2 3]
-> nil
my-println now supports three types: strings, vectors, and nil. And you have a road map for new types: just add new clauses to the cond in my-print. But it is a crummy road map, because it conflates two things: the decision process for selecting an implementation and the specific implementation detail.
You can improve the situation somewhat by pulling out helper functions like my-print-vector. However, then you have to make two separate changes every time you want to add a new feature to my-print:
·        Create a new type-specific helper function.
·        Modify the existing my-print to add a new cond clause invoking the feature-specific helper.
 
What you really want is a way to add new features to the system by adding new code in a single place, without having to modify any existing code. Clojure offers this kind of extensibility through protocols and multimethods.
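To make that concrete, here is a rough sketch (ours, not necessarily the implementation that follows) of where this is heading: my-print rebuilt on Clojure’s multimethods, where the dispatch decision is declared once and each type’s behavior lives in its own, independent defmethod.

(defmulti my-print class)
(defmethod my-print java.lang.String [ob]
  (.write *out* ob))
(defmethod my-print nil [ob]
  (.write *out* "nil"))
(defmethod my-print clojure.lang.IPersistentVector [ob]
  (my-print-vector ob))

Supporting a new type now means adding one new defmethod, without touching any of the existing ones.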

Saturday, April 21, 2012

Fixing On-Location Flash Photos in Photoshop CS5

Step One:
First, let’s look at the problem: Here’s a shot I took at sunset using an off-camera flash (the flash is up high and to the right of my camera position, aiming down at the subject and firing through a shoot-through umbrella). At this point in the shoot, I didn’t remember to add a CTO gel to warm the light, so the light from the flash is bright white (which looks really out of place in a beach sunset shot like this. The light should be warm, like the light from a setting sun, not a white flash).


Step Two:
To warm the light from the flash, go to the Adjustments panel and click on the Photo Filter icon (it’s the second icon from the right in the middle row). The Photo Filter controls will appear, and from the Filter pop-up menu, choose Orange (as seen here), then increase the Density to around 55%. So, how did I know 55% was right?
I opened a photo from a few minutes later in the shoot, when I had added a CTO gel to my flash, and matched the color and amount, but actually the amount doesn’t matter as much, because we’ll be able to lower it later if it’s too much. The whole image gets the Photo Filter, and it changes the color of the sky, and well…everything, but we just want to change the color of the light.


 Step Three:
What we need to do is hide the overall orange color, and then just apply it where we want it (where the light is actually falling on the subject). To do that, just press Command-I (PC: Ctrl-I) to Invert the layer mask attached to your Photo Filter adjustment layer, so your orange filter is hidden behind a black layer mask.
Now, get the Brush tool (B), press D to switch your Foreground color to white, and paint over your subject’s skin, hair, clothes, and anywhere the light from the flash is falling (as shown here). That way, the orange only affects where the light from the flash lands.




Step Four:
Remember in Step Two where I said I wasn’t worried about the amount because I could change it later? That’s now. Because we used an adjustment layer, we can just go to the Layers panel and lower the Opacity to lower the amount of orange (I lowered it to 64% here). If, instead of needing to lower the amount, you need more orange, then just double-click directly on the adjustment layer itself (in the Layers panel) and it reopens the Photo Filter controls in the Adjustments panel, so you can increase the Density amount. Here’s the final image, with the orange gel effect added in Photoshop.


The Fastest Way to Resize Brushes Ever (Plus, You Can Change Their Hardness, Too) in Photoshop CS5

Step One:
When you have a Brush tool selected, just press-and-hold Option-Control (PC: Ctrl-Alt) and then click-and-drag (PC: Right-click-and-drag) to the right or left onscreen. A red brush preview will appear inside your cursor (as seen here)—drag right to increase the brush size preview or left to shrink the size. When you’re done, just release those keys and you’re set. Not only is this the fastest way to resize, it shows you more than just the round brush-size cursor—it includes the feathered edges of the brush, so you see the real size of what you’ll be painting with (see how the feathered edge extends beyond the usual round brush size cursor)?
 
TIP: Change Your Preview Color
If you want to change the color of your brush preview, go to Photoshop’s Preferences (Command-K [PC: Ctrl-K]), click on Cursors on the left, and in the Brush Preview section, click on the red Color swatch, which brings up a Color Picker where you can choose a new color.


Step Two:
To change the Hardness setting, you do almost the same thing—press-and-hold Option-Control (PC: Ctrl-Alt), but this time, click-and-drag (PC: Right-click-and-drag) down to harden the edges, and up to make them softer (here I dragged down so far that it’s perfectly hard-edged now).
 
TIP: Turn on OpenGL Drawing
If you don’t see the red brush preview, you’ll need to check your preferences first. So, go to Photoshop’s preferences (Command-K [PC: Ctrl-K]), and click on Performance on the left side. In the GPU Settings section near the bottom right, turn on the Enable OpenGL Drawing checkbox, then restart Photoshop.



Fixing Dark Eye Sockets in Photoshop CS5

Step One:
Here’s the image we’re going to work on, and if you look at her eyes, and the eye socket area surrounding them, you can see that they’re a bit dark. Brightening the whites of the eyes would help, but the area around them will still be kind of shadowy, so we may as well kill two birds with one stone, and fix both at the same time.

Step Two:
Go to the Layers panel and duplicate the Background layer (the quickest way is just to press Command-J [PC: Ctrl-J]). Now, change the blend mode of this duplicate layer from Normal to Screen
(as seen here). This makes the entire image much brighter.

Step Three:
We need to hide the brighter layer from view, so press-and-hold the Option (PC: Alt) key and click on the Add Layer Mask icon at the bottom of the Layers panel (it’s shown circled here in red). This hides your brighter Screen layer behind a black layer mask (as seen here). Now, switch to the Brush tool (B), choose a smallish, soft-edged brush, and paint a few strokes over the dark eye sockets and eyes (as shown here). Now, I know
at this point, it looks like she was out in the sun too long with a large pair of sunglasses on, but we’re going to fix that in the next step.


Step Four:
What brings this all together is lowering the Opacity of this layer, until the parts that you painted over and brightened in the previous step blend in with the rest of her face. This takes just a few seconds to match the two up, and it does an incredibly effective job. See how, when you lower the Opacity to around 35% (which works for this particular photo—each photo and skin tone will be different, so your opacity amount will be, too), it blends right in? Compare this image in Step Four with the one in Step One and you’ll see what I mean. If you’re doing a lot of photos, like high school senior portraits, or bridesmaids at a wedding, this method is much, much faster than fixing everyone’s eyes individually.





Getting Acquainted with Enterprise Linux


Enterprise Linux has four versions: Two of the versions are designed for workstation and desktop usage, and the other two versions are designed for server applications. Don’t get too bogged down trying to sort out the differences of these versions because the four versions of Enterprise Linux are really quite similar. In this chapter, I examine the different versions of Red Hat Enterprise Linux and what you can do with them. Before I go into the version descriptions, take a look at the history of Enterprise Linux.

Exploring the History of Enterprise Linux
Red Hat Enterprise Linux is one of many available distributions of Linux. Several companies make their own commercial Linux distributions, but in this book, I discuss the Enterprise Linux distribution by Red Hat. A Linux distribution is a complete version of the Linux operating system that contains the Linux kernel as well as other applications and programs that can be used for doing some type of work. The Linux kernel is the core of the Linux operating system and controls how the operating system functions with the hardware that makes up your PC. (Linux was originally developed by Linus Torvalds in 1991 while he was a college student in Finland.)
I don’t want to bore you with a lot of historical information about Enterprise Linux, but a little background information for a better understanding of the Linux kernel and version numbers is helpful. Exact dates aren’t important, so I’ll just give you the quick rundown of the history of Red Hat Linux and the introduction of Enterprise Linux.
The first publicly available version of Red Hat Linux appeared in the summer of 1994 and was based on kernel version 1.09. (The kernel is identified by a number that refers to the particular version of the kernel.) Since the release of the first version of the Red Hat Distribution, there have been many more releases, with each release improving upon the earlier versions. Red Hat made no distinction between its version’s suitability for home use or commercial (business) use of its distributions until May, 2002. By then, Red Hat was at release 7.3 of the Red Hat Linux distribution. Coinciding with the release of version 7.3 was the introduction of Red Hat Linux Advanced Server 2.1, which was renamed Enterprise Linux 2.1. Enterprise version 2.1 was based on the Red Hat 7.3 version but was intended for commercial/business use. The major difference between the commercial and home versions of Red Hat Linux was in the support offerings available for the versions. The home version, if purchased through a boxed set, gave the user a limited number of technical support calls for a short time period, and then the users were on their own. The commercial version provided a longer time period for technical support and offered additional technical support that could be purchased at additional cost. Also, Red Hat had issued a new version of its operating system about every six months — changing far too often for most commercial uses. With the release of Enterprise Linux 2.1, Red Hat slowed the pace of system changes to give users a more stable platform (thus requiring less frequent updates) and focused its commercial efforts on the Enterprise version.
From this point forward, Red Hat continued development of its home user versions through version 8 and finally version 9, which was the last Red Hat distribution that was available for home user purchase. In the summer of 2003, Red Hat decided that it would merge its open development process with the Fedora Linux project — and the Fedora Project was born.
In October, 2003, Red Hat introduced Enterprise 3 that, like its predecessor Enterprise 2.1, was specifically geared toward business/enterprise users. Enterprise 3 was initially available in three versions — AS, ES, and WS — each designed for specific types of service. In the summer of 2004, Red Hat added another version of Enterprise 3 specifically for the desktop. That brings us to the present — Enterprise version 4 — which is the focus of this book.

Examining the Versions of Red Hat Enterprise
All versions of Enterprise Linux share some similarities in their product features. The most significant of these features are
·        A 12–18 month release cycle
·        A common operating system, applications, and management tools
·        One year of support and updates using the Red Hat Network included with the initial purchase, which is then renewable annually for 5 years for an additional yearly fee
Having a 12–18 month release cycle makes the update process more predictable because a user knows that he won’t have to make any major changes to his system configuration for at least a year and perhaps longer. Because all versions are based on the same operating system, a system administrator can more easily configure systems and maintain consistency, since the same skill set is used for all versions.
Probably the most significant feature of Enterprise Linux is the level(s) of support available from Red Hat. One of the most frequently heard criticisms of Linux is the lack of user support typically available. With Enterprise 3, and Enterprise version 4 covered in this book, Red Hat has seriously addressed the support issue.
In the following sections, I examine the different versions of Enterprise Linux 4. (For installation details, see Appendix A.) Then I conclude the chapter with what Enterprise Linux can do for you.

Red Hat Enterprise AS
Red Hat Enterprise AS is the top-of-the-line server operating system available from Red Hat. Enterprise AS is designed for large departments or company data centers. The AS version provides the same server functions as the ES version but is best suited for servers that have more than two CPUs with greater than 8GB of system RAM. In addition to support for more than two CPUs in the same system, there is support for many different types of CPUs as well, such as the IBM iSeries, pSeries, and zSeries.
The greatest difference between the AS and ES (see the following section) versions is the level of support available with the AS version. Users can purchase the premium level support option that provides 24/7 support with a guaranteed one-hour response time.

Red Hat Enterprise ES
Red Hat Enterprise ES is intended to provide for an entry-level or midrange server environment with support for up to two CPUs and 8GB of system RAM.
The ES version is quite similar to the AS version (see the preceding section) but is meant for smaller-scale operations and does not provide the same level of support as the AS version. The ES version includes the following applications:
·        Web server
·        Network services (DNS [Domain Name System], DHCP [Dynamic Host Configuration Protocol], firewall security, and more)
·        File/print/mail servers
·        SQL (Structured Query Language) databases

Red Hat Enterprise WS
Red Hat Enterprise WS provides nearly the same functionality as the Desktop version. Included with WS are the same Web browser, office suite, and e-mail client (Firefox, OpenOffice.org 1.1, and Evolution, respectively). The major difference between the WS and Desktop (see the following section) versions is the number of CPUs supported. The WS version supports up to two CPUs, but the Desktop version supports only one.

Red Hat Desktop
According to Red Hat, Enterprise 4 Desktop is “a high-quality, full-featured client system for use in a wide range of desktop deployments where security and manageability are key.” What does this mean to the typical user?
This version focuses on the desktop, containing applications that are used on the desktop. Red Hat Desktop includes a mail client program, similar to MS Outlook, called Evolution. Also included is the Firefox Web browser; a complete office suite, OpenOffice.org 1.1; and GAIM, which is an instant messaging client.
You can find out more about some of the applications available in Enterprise Linux later in this book.
Third-party productivity applications are also installed by default during the system installation. This is an improvement over earlier versions of Red Hat Linux. Adobe Acrobat Reader, a Macromedia Flash plug-in, RealPlayer, and Java are just a few of the applications that work in Red Hat Desktop right out of the box.
As part of the Enterprise family of programs, Red Hat Desktop shares many of the features and tools of the other Enterprise versions. A user or administrator who is familiar with one of the versions of Enterprise 4 will be able to easily use a different version. Red Hat Desktop supports a system with one CPU and up to 4GB of system RAM.

Putting Enterprise Linux to Work
Whether you’re planning to use the AS or ES server versions of Enterprise Linux or you’ll be using the WS or Desktop versions, the choices of productivity software and what you can do with them are nearly infinite. You can use Enterprise Linux to manage all your system hardware, do system administration, create networks for sharing data, browse the Internet, serve Web pages, and much more. Take a look at just some of the tasks that you can do with Enterprise Linux.

Configuring your local network
All versions of Enterprise Linux include the X Window System (find more on this in Chapter 5), based on XFree86, which provides the foundation for a graphical user interface (GUI). However, you aren’t stuck with just one GUI because Enterprise Linux supplies two well-known GUIs: KDE and GNOME.
·        KDE: The K Desktop Environment is an optional GUI that can be selected at installation time.
·        GNOME: This is the default GUI that’s installed when the operating system is installed.
If you have both GUIs installed, a tool on either desktop makes switching between the desktops very easy.
You don’t have to spend additional money to buy typical productivity applications such as word processing or spreadsheet programs. All versions of Enterprise Linux ship with a complete office productivity suite — OpenOffice. org — as well as many other graphical applications that can be used for editing graphics, building Web sites, and much more.
With either desktop, you can use the included graphical-based tools to configure and maintain your systems. You can also configure the hardware in your system and add or remove devices. Additionally, you can configure printers to work with your local network.
Enterprise Linux includes support for many types of printers from different manufacturers. You can configure a printer connected directly to your system as well as many types of network-connected printers.
Enterprise Linux gives you everything you need to set up a local network so that your systems can share data with each other. For example, you can configure the AS and ES versions to provide local network services, such as Network File System (NFS), that shares files between the servers and WS and Desktop clients. Or, you can configure the Network Information System (NIS) to give your users the ability to log in to the network and use all the network resources.
You will also be able to share data with computers running other operating systems, such as MS Windows, Novell NetWare, or Mac OS X. Enterprise Linux gives you all the tools that you need to configure your system to communicate with these other operating systems and exchange information.

Using Enterprise Linux to maintain your system
Keeping your systems running properly and updated with the latest patches can be a daunting proposition. Don’t worry, though, because Enterprise Linux gives you all the tools that you need to perform these tasks. All versions of Enterprise Linux include a subscription to the Red Hat Network as well as an update application that constantly scans your system configuration and installed packages looking for packages that can be updated.
Tools are available in all versions that you can use to create and remove system users and groups. You use these same tools to change properties and permissions for your users and groups as well. Several applications are available for creating file archives for backing up your data. You can compress your data to maximize your storage space and speed up your backup and restore process.
Installing application software in Enterprise Linux is a relatively easy process because most applications are available in the Red Hat Package Manager (RPM) format. You can use the graphical-based RPM tool to install your application, or you can use the rpm command from a command prompt. In many instances, you can choose either the graphical-based tool or the command line to enter your commands.

Securing your system
Anyone who uses a computer these days is well aware of the increasing problems caused by unsecured systems. Enterprise Linux includes many of the tools that you need to secure your system from malicious attacks.
You can configure a firewall on your system by making a few choices and answering a few questions from the graphical-based firewall tool. If you want to go into more detail with your firewall configuration, you can use the command line firewall tool to create more complex firewall rules. You can protect your systems from internal attacks (attacks that originate inside your organization) as well as external (outside) attacks.
Applications are also available that you can use to actively detect system intrusions. You can configure how your system should respond to intrusions and what actions should be taken to ensure that your systems are not vulnerable to future attacks.

Providing Internet services
You can use Enterprise Linux to serve information across the Internet to users on different networks than your own. The ES and AS versions of Enterprise Linux include the following Internet servers:
 
·        Apache httpd Web server: The Apache Web server is the most widely used Web server in use today. (See Chapter 15.)
·        FTP server: The vsftpd server is an implementation of the File Transfer Protocol (FTP) that is used for transferring files across the Internet. (See Chapter 14.)
·        sendmail: This is the most widely used mail transport agent in use today.
You can remotely log in to another computer on your own network or even on the Internet. Using the telnet program, or another more secure program called ssh, makes remote logins easy. After logging in remotely, you can control the remote computer as though you were sitting in front of it.
In Enterprise Linux, all Internet servers are based on the Transmission Control Protocol/Internet Protocol (TCP/IP), which is the protocol on which the Internet is based. Any network applications that use TCP/IP are supported natively by Enterprise Linux.
As you can see from this quick examination of the features of Enterprise Linux, you can do a lot with it. In fact, anything you can do with the most widely used operating system (MS Windows), you can do as well or better with Enterprise Linux. Your systems will certainly be more secure and less vulnerable to attack if you are running Enterprise Linux. The remaining chapters of this book explain in more detail the features briefly discussed in this chapter.

Comparing Enterprise Linux and Fedora Core

In Fall, 2003, Red Hat announced that it would no longer sell nor support its retail box version of Red Hat Linux. Version 9 would be the last of many versions that I’ve seen over the years.
Instead of continuing this long line of versions, Red Hat announced that it would provide support to the Fedora Project for development of what Red Hat described as a place for testing cutting-edge technology. What this means is that all development efforts for all Red Hat software would go into the Fedora Project and the Fedora software, which is known as Fedora Core. New releases of Fedora Core will occur about every six months, which is far too often for production-based systems, but allows for testing of features that would appear at some later date in the Enterprise versions. At the same time as the Fedora Project announcement, Red Hat placed nearly all its efforts into promoting its Enterprise Linux product and its features and benefits.
Many people were very confused by this move by Red Hat, and many users had a strong feeling that Red Hat Linux would no longer be available.
This is simply not true. What was known as Red Hat Linux is now simply called the Fedora Project.
In my opinion, except for the name change and not being able to purchase a retail box version of Fedora, nothing has really changed as far as the features and functionality of the operating system.
The major advantage of Enterprise Linux over Fedora Core is the number of support options that are available from Red Hat. For many years, one of the biggest reasons given by the corporate world for not using Linux has been a lack of user support. With the promotion of Enterprise Linux, Red Hat has effectively removed lack of support as a reason for a company not to consider using Linux.
Another key feature of Enterprise Linux is the extended development and release cycle for new versions. Red Hat has stated that it plans to release new versions of Enterprise Linux every 12–18 months rather than every 6 months, as had been the case with Red Hat Linux.
However, probably the most significant difference between Fedora Core and Enterprise Linux is the difference in price. Purchasing the AS version of Enterprise Linux with the standard support option costs about $1,500, with the premium support package costing about $2,500. Fedora Core, on the other hand, is free.
What does all this mean to the users of Enterprise Linux or Fedora? Can you use Fedora Core to provide the same services and functionality as Enterprise Linux? The answer is a resounding yes. Users can do everything in Fedora that they can do with Enterprise Linux. This is good news to users of Enterprise Linux as well. Any user who is familiar with Fedora Core can easily make the move to Enterprise Linux because they are nearly identical in features and functionality.





Friday, April 20, 2012

Leveraging the SharePoint Object Model



SharePoint provides a rich and complex object model for working with SharePoint data. Although it is challenging to master the details of the SharePoint object model, an even greater challenge that many developers face is taking their knowledge of the object model and using it to craft a solution that delivers on a set of requirements. By their nature, SharePoint solutions are often made up of loosely coupled components that combine to deliver a full set of functionality, and it is sometimes difficult to figure out how to translate knowledge of the object model into the set of loosely coupled components in SharePoint. Ask three SharePoint developers how to solve a particular, nontrivial problem, and you will likely receive three unique solutions that might all be equally valid.
This chapter attempts to first cover, in broad strokes, the various customization mechanisms that SharePoint exposes to developers and then provides a number of sample problems and describes how they can be solved the “SharePoint Way.” The chapter provides some basic code snippets illustrating how you can implement various features, but because this chapter is more about how components can be plugged together and combined into solutions, the coverage is not exhaustive.

Customizing SharePoint
SharePoint is a powerful platform that offers many points of extension and customization. You can’t cover everything that SharePoint encompasses within a single chapter, but this chapter highlights some of the most common bits of functionality that you are likely to need to implement in your role as a SharePoint developer.

UI Components
Master Pages and Themes
Master Pages and Themes are the primary mechanism in SharePoint to modify the look and feel of SharePoint. Master Pages can best be thought of as controlling the “edges” of a SharePoint page. Everything from the breadcrumb up, and from the quick launch to the left, is determined by the Out of the Box (OOTB) v4.master Master Page. This could be extended to control items to the right of and below the primary content area of SharePoint pages as well. One common change typically implemented by deploying a custom Master Page is the addition of a standard footer to the bottom of pages in SharePoint.
Themes determine the color palette utilized by SharePoint. You can build themes using the Theme Designer available from Site Settings or also by using PowerPoint 2010 themes. In addition to developing a custom theme, you likely also need a custom .CSS file to provide the granular control most branding efforts require.
 
Custom Web Parts
Web parts are the primary UI building block within SharePoint. They are modular elements that you can place on almost any page within SharePoint. SharePoint ships with a large number of web parts, but it will often be necessary to write your own.
Within SharePoint 2010, there are two types of web parts: visual web parts and traditional web parts. Back in the days of SharePoint 2007, there was only a single type of web part, the traditional web part. Unlike most other visual controls within Visual Studio, web parts did not have any sort of WYSIWYG designer, and developers had to build the UI via code. SharePoint 2010 introduces the ability to build visual web parts that enable the developer to use a WYSIWYG designer to build the UI.
Following is code for a traditional, “Hello World!” web part:

using System;
using System.ComponentModel;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using Microsoft.SharePoint;
using Microsoft.SharePoint.WebControls;
namespace Wrox.ObjectModel.TraditionalWebPart
{
    [ToolboxItemAttribute(false)]
    public class TraditionalWebPart : WebPart
    {
        protected override void CreateChildControls()
        {
            Label label = new Label();
            label.Text = "Hello World!";
            Controls.Add(label);
        }
    }
}
As stated earlier, the “visual” aspect of the new Visual Web Part simply refers to the design-time experience. Rather than having to programmatically create the controls and add them to the web part as in the preceding example, a Visual Web Part enables you to drag and drop controls onto the design surface. Figure 1-1 shows what a visual web part looks like in Visual Studio.
Figure 1-1: A Visual Web Part design canvas

When placed on a SharePoint page, the two web parts appear to be nearly identical, as shown in  Figure 1-2.
Figure 1-2: Web parts

You probably wonder why traditional web parts continue to exist within SharePoint 2010 because Visual Web Parts provide a better design experience. The answer is twofold. The first is to maintain backward compatibility and to allow solutions that can be deployed to both 2007 and 2010 environments. The second is that because of the way Visual Web Parts deploy, they are not supported within Sandboxed Solutions. There are some alternative implementations of Visual Web Parts, such as those found within Microsoft’s Visual Studio 2010 SharePoint Power Tools, which do work for sandboxed solutions.
Web parts are primarily applicable when you need to implement functionality that requires a UI, is meant to be end-user configurable, and is meant for the site owners or designers to place on pages of their choosing. This may not always be the case however.

Custom Application Pages
The other primary UI building block within SharePoint is the custom application page. These are standard .ASPX pages that are deployed beneath the _layouts directory within your SharePoint environment.
Unlike web parts, custom application pages are standalone pages of self-contained functionality.
Users cannot edit the contents of the page or add custom web parts. If you don’t want the user to determine the placement of the control, or you need to link to the page from another solution element (for example, a custom action, web part, and so on), application pages are generally the right solution to the problem.
A common use for custom application pages is for settings pages, which is probably the scenario in which you see them most used within the OOTB SharePoint screens. For example, go to the site settings section of any site in SharePoint, and every one of the settings links takes you to an application page.
Creating a custom application page is as simple as selecting Add New Item within your project, and selecting the Application Page type. This creates an .ASPX page beneath the Layouts directory of your project, as shown in Figure 1-3.
Figure 1-3: Application page in Visual Studio

As you can see in the screenshot, application pages are just like any traditional .ASPX page written in .NET with a few references added to SharePoint assemblies. Just like the other pages within SharePoint, application pages make use of a Master Page, and content placeholder regions are exposed for you to add content to. Figure 1-4 shows a sample application page.
Application pages are deployed to the Layouts directory and, as such, are not deployed as part of a feature. This means that application pages are accessible from within any site within your farm just by appending /_layouts/<path to page> to the site URL. SharePoint does not secure them beyond requiring that the user has access to the site, so you need to check user access within your own code.
Figure 1-4: Application page in SharePoint
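Because SharePoint checks only that the user can reach the site, an application page that does anything privileged typically performs its own permission check. The following is a minimal sketch of that idea (the page class name is hypothetical; LayoutsPageBase, DoesUserHavePermissions, and SPUtility.HandleAccessDenied are the standard SharePoint types and methods involved):

using System;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Utilities;
using Microsoft.SharePoint.WebControls;

namespace Wrox.ObjectModel.Layouts
{
    public partial class AdminSettingsPage : LayoutsPageBase
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Enforce a stricter requirement than simply having access to the site.
            if (!SPContext.Current.Web.DoesUserHavePermissions(SPBasePermissions.ManageWeb))
            {
                SPUtility.HandleAccessDenied(new UnauthorizedAccessException());
            }
        }
    }
}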

Custom Lists
SharePoint Lists provide a flexible and extensible way for users to store data within SharePoint and are frequently used by developers as a location to store information as well. In many instances, these types of “configuration” lists are hidden from the user, and all interaction with the list data occurs through UI elements such as web parts and application pages, but there are cases, particularly when the user must maintain lists of data, in which you want to expose the list directly to the user.
In those cases, it may be sufficient to simply link them directly to the list and use the default list UI to allow the users to manage the list items. In more complicated scenarios (consider an example in which you have complex multicolumn validation requirements or in which you have multilevel dropdowns) the OOTB UI provided by lists is insufficient.
 
Custom List Forms
List forms refer to the add, edit, and display forms accessible from SharePoint when a user adds a new list item or clicks on an existing list item and chooses to view or edit it. Typical scenarios in which you might need to customize these would be if you need to add complex validation logic that spans across multiple columns in the list or in which selections made to one column affect those available in another. However, list forms are only one way in which you can enter list data within a list. Users interacting with a list through the datasheet view, through the office information panel, or through Access cannot enjoy the same experience, so you need to account for these usage scenarios.
 
The chapter “Custom Field Types, Content Types and List Definitions” covers lists and list forms in great detail and illustrates how to create custom list forms, so they will not be covered further here. SharePoint 2010 provides a number of mechanisms for customizing these forms: you can use SharePoint Designer, InfoPath, and Visual Studio to create more robust list forms.
 
Custom Field Types
Field types are the basic building blocks of lists and content types. Whenever you add a new column to a SharePoint list, you select from a variety of predefined column types such as Currency, Number, and Choice. By selecting these types, you can decide what types of data can be entered and how that data displays and can be manipulated. Figure 1-5 shows a standard column creation dialog.
Figure 1-5: Creating a column in SharePoint

Creating custom field types adds to the list of standard types available for new columns and enables you to dictate how information displays and can be manipulated within the standard SharePoint list UI. They are most appropriate when you need to implement rules about the types of data fields can contain or the formatting of the display of your data (masking a Social Security number, for example).
Delegate Controls
SharePoint supports delegate controls, which might best be described as functionality placeholders. Developers can register their own implementation of the functionality that overrides the OOTB implementation. Many delegate controls are embedded within the default Master Page, and they include things such as the search box.
An example of how to embed a delegate control in a Master Page is shown in the following code snippet:

<SharePoint:DelegateControl ControlId="SmallSearchInputBox"
    AllowMultipleControls="false"/>
This snippet basically tells SharePoint that the developer wants to embed a control called SmallSearchInputBox into this location, but the developer requires no knowledge of the implementation of the control.
Delegate controls can also have a default implementation specified directly within the body as well.
Following is an example in the TopNavigationDataSource delegate control from the v4.master:

<SharePoint:DelegateControl runat="server"
    ControlId="TopNavigationDataSource" Id="topNavigationDelegate">
  <Template_Controls>
    <asp:SiteMapDataSource
      ShowStartingNode="False"
      SiteMapProvider="SPNavigationProvider"
      id="topSiteMap"
      runat="server"
      StartingNodeUrl="sid:1002"/>
  </Template_Controls>
</SharePoint:DelegateControl>
You can override this default implementation of a delegate control by deploying a feature that contains a control definition that uses the same ID specified in the ControlId attribute of the delegate control. The elements file would look something like this:

<?xml version="1.0" encoding="utf-8" ?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <Control
    Id="SmallSearchInputBox"
    Sequence="25"
    ControlClass="WroxSearchBox"
    ControlAssembly="Wrox.ObjectModel"/>
</Elements>
The preceding definition basically tells SharePoint that this control (WroxSearchBox) should be used anywhere the SmallSearchInputBox delegate control is requested. The “AllowMultipleControls” attribute of the DelegateControl determines what happens if multiple implementations of the control exist. If it is set to true, all controls will be included in the order of their sequence number (lowest to highest). If it is false, only the control with the lowest sequence number is used.
Delegate controls are most useful when you need to replace a bit of OOTB functionality that exists within multiple locations in SharePoint such as the search box, navigation providers, and so on. Unfortunately, not every control in the OOTB Master Pages uses delegate controls, so their use is limited to specific controls.
Following is a list of the delegate controls that exist within the v4.master Master Page:
 
·        AdditionalPageHead
·        GlobalNavigation
·        GlobalSiteLink0
·        GlobalSiteLink2
·        GlobalSiteLink3
·        PublishingConsole
·        SmallSearchInputBox
·        TopNavigationDataSource
·        QuickLaunchDataSource
·        TreeViewAndDataSource
Some of these delegate controls, such as the TopNavigationDataSource control, are embedded within content placeholders of the Master Page. This means that page layouts can replace the entire contents of this section and remove the delegate control definition. This is an issue covered in more depth when discussing implementing a global navigation solution in Chapter 11, “Building a Custom Global Navigation Solution.”
 
Nonvisual Components
Visual components are only half of the story with most SharePoint customizations. Equally necessary is the ability to implement nonvisual functionality, such as executing code at periodic intervals (a timer job) or executing code after an item is added to a list (event handlers or workflow).

Event Handlers
You can attach event handlers in SharePoint to lists and document libraries and can trigger them before adding, updating, or deleting items or after you add, update, or delete items. The event handlers that occur before the event end in “ing” (for example, ItemAdding), whereas the event handlers that occur after an event occurred end in “ed” (for example, ItemAdded).
Within the “ing” events, the developer can stop the action from occurring. So for example, if you need to implement a rule that enforces that items can be added to a list, but can never be removed, you could do that with an ItemDeleting event handler that cancels the action.
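As a minimal sketch of that rule (the class name is illustrative, and binding the receiver to the list through a feature is assumed), an ItemDeleting handler that blocks deletions could look like this:

using Microsoft.SharePoint;

// Illustrative sketch: an event receiver that lets items be added but never removed.
public class PreventDeleteReceiver : SPItemEventReceiver
{
    public override void ItemDeleting(SPItemEventProperties properties)
    {
        // "ing" events fire before the action, so they can cancel it.
        properties.Cancel = true;
        properties.ErrorMessage = "Items in this list cannot be deleted.";
    }
}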
 
Workflow
SharePoint 2007 supported a single type of workflow called a list workflow. These are workflows associated with a list that can be triggered manually, when an item is created, or when an item is updated. Unlike event handlers, list workflows can be triggered only after an action occurs and only on item addition or update; you cannot trigger them on a deletion.
Because there is considerable functionality overlap, determining when to write a list workflow versus writing an event handler is confusing. Workflows provide some advantages during the creation process because codeless workflows can be authored via SharePoint Designer, and workflow actions exist so that developers don’t need to write custom code. But assuming you need to write code with either implementation, event handlers are best used for short-running, atomic processes that do not require user involvement and are tied to the specific list in question. Event handlers are also the only option if you want to intercept an action before it happens. If the requirement being met involves long-running processes that involve waiting for multiple inputs from users or which are triggered from multiple locations, a workflow is more likely to be the correct solution.
SharePoint 2010 also introduces a new type of workflow called a site workflow. Site workflows are tied to sites rather than individual lists and can be triggered only manually (or through code). Site workflows are primarily useful when the workflow in question is not tied to actions around a particular item in a list. Imagine a scenario in which you need to find a single item in a list based on some characteristic and then email the creator of that item. In that case, because you want the workflow to find the item, a site workflow is appropriate. If instead you want to email the creator of an item every time it is updated, a list workflow is appropriate.

Timer Jobs
Timer jobs are SharePoint’s mechanism to enable you to run code on a scheduled basis. You can schedule timer jobs to run anywhere from every minute to every month.
Timer jobs are the preferred mechanism any time you must run code on a scheduled basis or when operations are particularly long running. An example of a scenario in which a timer job might be appropriate is a requirement to scan all My Sites every month and flag any files older than 90 days so they can be reviewed for deletion.
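As a hedged sketch (the constructor signature and job body are assumptions, chosen to line up with the CustomTimerJob used in the feature receiver example later in this chapter), a custom timer job is a class that derives from SPJobDefinition and overrides Execute:

using System;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;

public class CustomTimerJob : SPJobDefinition
{
    // A parameterless constructor is required for serialization by the timer service.
    public CustomTimerJob() : base() { }

    public CustomTimerJob(string jobName, SPSite site)
        : base(jobName, site.WebApplication, null, SPJobLockType.Job)
    {
        Title = jobName;
    }

    public override void Execute(Guid targetInstanceId)
    {
        // The scheduled work goes here, for example scanning My Sites
        // for files older than 90 days.
    }
}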

Feature Receivers
Feature receivers are just an alternative form of an event receiver tied to SharePoint Features rather than SharePoint Lists. Feature receivers enable the developer to run arbitrary code whenever a feature is installed or removed from a farm/web application/site collection/web, whenever a feature is activated or deactivated, or when a feature is upgraded.
One common use for feature receivers is to register timer jobs deployed by a feature. Following is an example of code that registers a timer job whenever the feature is activated:
 
public override void FeatureActivated(SPFeatureReceiverProperties properties)
{
    SPSite site = (SPSite)properties.Feature.Parent;
    CustomTimerJob smbJob = new CustomTimerJob("My Job Name", site);

    // Run every 2 minutes, starting at second 0 of each minute.
    SPMinuteSchedule schedule = new SPMinuteSchedule();
    schedule.BeginSecond = 0;
    schedule.EndSecond = 59;
    schedule.Interval = 2;
    smbJob.Schedule = schedule;
    smbJob.Update();
}
Another sample use is when a feature depends on certain data, such as a SharePoint subsite, existing before the feature can work properly. Your feature receiver could create the required subsite as part of the activation process.
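A rough sketch of that idea follows; a site-collection-scoped feature is assumed, and the subsite URL, title, and template ("STS#1" is the OOTB Blank Site) are illustrative values:

public override void FeatureActivated(SPFeatureReceiverProperties properties)
{
    SPSite site = (SPSite)properties.Feature.Parent;
    SPWeb rootWeb = site.RootWeb;

    // Create the subsite the feature depends on (1033 = English).
    using (SPWeb newWeb = rootWeb.Webs.Add(
        "archive", "Archive", "Subsite required by this feature",
        1033, "STS#1", false, false))
    {
        // The subsite now exists and can be configured further here.
    }
}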

External Access
Up to this point, this chapter has focused only on solutions that exist within SharePoint and run within the farm, but it is also common to need to access SharePoint content and interact with it from outside of the farm. One example of such a need is an application that lets you scan documents from your desktop and store them directly within SharePoint. SharePoint provides two mechanisms to support external access. The first, which existed in SharePoint 2007 and continues to exist in 2010, is web services. The second is the Client Object Model, new to SharePoint 2010, which you can access from .NET, Silverlight, or JavaScript. The main advantage of using the Client Object Model over the web services is that rather than having to learn a completely new way to access SharePoint content, developers can reuse much of their knowledge of the server-side object model. There definitely are some differences, but in general, the Client Object Model provides a much more familiar framework.

SharePoint Web Services
SharePoint provides a rich set of web services (both .ASMX and WCF/RESTful) to enable external applications to interact with SharePoint. Although not everything is exposed via web services, a great deal of functionality is. For those writing code in .NET, Silverlight, or JavaScript, the Client Object Model introduced in 2010 will likely be the preferred mechanism for interacting with SharePoint, but for developers writing in other languages, web services continue to be the primary mechanism used to interact with SharePoint.
The list of web services SharePoint provides follows:
 
·        Admin
·        Alerts
·        Authentication
·        BDC Admin
·        Cell Storage
·        Copy
·        Diagnostics
·        Document Workspace
·        Forms
·        Imaging
·        Lists
·        Meetings
·        People
·        Permissions
·        Shared Access
·        Distribution List
·        Site Data
·        Sites
·        Search
·        User/Group
·        Versions
·        Views
·        Web Part Pages
·        Webs
·        Organization Profile Service
·        Published Links
·        Social Data
·        User Profile
Covering each one of these services is outside of the scope of this chapter, but you can benefit from learning the capabilities of these web services. As mentioned, Microsoft recommends using the managed Client Object Model whenever possible instead of the web services. Chapter 4, “Leveraging the SharePoint Lists Web Service,” focuses on just one of these web services, the List Web Service.
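To give a flavor of calling one of these services from outside the farm, the following is a rough sketch of a console application that calls the Lists web service. It assumes a Web Reference named ListsService has been added that points to http://localhost/_vti_bin/Lists.asmx, so the proxy class name is an artifact of that reference rather than something SharePoint provides:

using System;
using System.Net;
using System.Xml;

class Program
{
    static void Main()
    {
        // Proxy class generated by the ListsService web reference (name is an assumption).
        ListsService.Lists listsProxy = new ListsService.Lists();
        listsProxy.Url = "http://localhost/_vti_bin/Lists.asmx";
        listsProxy.Credentials = CredentialCache.DefaultCredentials;

        // GetListCollection returns an XmlNode describing every list in the site.
        XmlNode lists = listsProxy.GetListCollection();
        Console.WriteLine(lists.OuterXml);
    }
}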

Client Object Model
The Client Object Model is a set of APIs that enable you to design custom applications that access SharePoint content. It includes a library for client applications based on the .NET Framework, targeted at building things such as console, Windows Forms, and WPF applications. The Client Object Model also includes libraries for Silverlight and JavaScript client applications; the Silverlight library is composed of a subset of the object model. The new APIs in SharePoint 2010 therefore include the managed .NET Client Object Model, the Silverlight Client Object Model, the ECMAScript (JavaScript, JScript) Client Object Model, and LINQ to SharePoint.
The .NET Client Object Model for SharePoint 2010 is one of the newest APIs for working with SharePoint content. The Client Object Model API enables you to build custom applications in any of the managed .NET languages using an object-oriented approach. This new API is the ideal method to access and manipulate SharePoint 2010 content from a client application. The new APIs were designed to have counterparts to many of the types of objects that you have been using from the Microsoft.SharePoint server API. However, some objects from the server API are exposed with only limited functionality in the Client Object Model APIs.
Using the new object model is fairly straightforward. You begin by creating a client context object, much as you would if you were using the server API. From there, you can load, create, and manipulate the core components of SharePoint: sites, webs, lists, and libraries. You can, of course, also access the children of these objects. Depending on the client that you develop, you use methods to perform operations against these objects synchronously or asynchronously, which gives you more control over your application's user experience. Silverlight and JavaScript clients can perform actions only asynchronously, whereas .NET clients perform operations synchronously. As previously mentioned, Client Object Model APIs exist for .NET, Silverlight, and ECMAScript. When you use the API for any of these languages, the syntax looks similar, but there may be some minor differences depending on the language. In the following sections you learn how to use the Client Object Model to load sites and to manipulate lists and libraries.
 
Working with the ClientContext
The most basic building block of content access from standalone applications to SharePoint is the ClientContext class. This class is similar to the SPContext server class. The ClientContext class is responsible for making connections to sites, executing queries, fetching lists, and performing all other actions in SharePoint 2010.
To create a client context for a SharePoint 2010 site located on the local server, use the following code:

ClientContext clientContext = new ClientContext("http://localhost/");
This code assumes that the user running the application is logged in with an account that has SharePoint permissions; the newly created client context uses that user's credentials when accessing SharePoint 2010 objects. In some scenarios, this can be a problem.
The account that you need to use for SharePoint could be different from the account you use to log into the computer. In that case, you can specify the credentials that your application should use.
To replace the default network credentials with custom ones, use code like the following:

NetworkCredential credential = new NetworkCredential("Administrator", "Password",
    "TestDomain");

clientContext.Credentials = credential;
This creates a new network credential and sets the credentials for the client context to them. If you use forms-based authentication, you also need to change the AuthenticationMode property of the ClientContext object to ClientAuthenticationMode.FormsAuthentication to make this work.
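For example, a hedged sketch of that forms-based authentication setup might look like the following (the login name and password are placeholder values):

clientContext.AuthenticationMode = ClientAuthenticationMode.FormsAuthentication;
clientContext.FormsAuthenticationLoginInfo =
    new FormsAuthenticationLoginInfo("formsUser", "formsPassword");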
Your next step toward accessing content with the Client Object Model is to load a web or multiple webs into the context you have created. The Client Object Model does not load any content until it is explicitly requested. To request that the client context load a SharePoint 2010 client object, you must write lines of code that specify what objects to load. To add objects to load into the context, you first access the property from the client context that you want loaded. Next, you call the Load method on the client context with the property as a parameter. Finally, you must call the ExecuteQuery method on the client context to send the request to SharePoint.
The Client OM enables you to load multiple objects by calling Load multiple times. When this is done, all requests are batched and performed using specialized WCF services. This need to explicitly load objects or queries into the context serves a vital purpose. The Load method consolidates multiple requests together to reduce network traffic and improve performance.

Working with Sites and Webs
Now that you have a client context, look at the things you can do with it. For starters, you can load the root web and its child webs. To do this, use the following code:

Web rootWeb = clientContext.Web;
clientContext.Load(rootWeb);
clientContext.Load(rootWeb.Webs);
clientContext.ExecuteQuery();
This loads the entire web and its properties except for the EffectiveBasePermissions, HasUniqueRoleAssignments, and RoleAssignments properties. If you need to work with any of these properties, you must explicitly request them. If you are concerned with reducing unnecessary data transfer between your client application and the server, you should request only the specific properties that your application will use. For example, if you know that you want to use only the title of a Web site, you would use the following code to load only the title property:

Web oWeb = clientContext.Web;
clientContext.Load(oWeb, web=>web.Title);
clientContext.ExecuteQuery();
Using the Client OM, you can also change many properties of an existing web. For instance, you can update a web and change the title and description. To do so, access the web you want to change, modify the properties, and update the web. Because this is the Client OM, you must always call ExecuteQuery to send the request to the server. Code to do this would look like the following:

Web oWeb = clientContext.Web;
oWeb.Title = "Updated Web Title";
oWeb.Description = "This is a sample of updating a web";
oWeb.Update();
clientContext.ExecuteQuery();
You can also create Web site objects with the Client OM. To do this, use the WebCreationInformation class. You set the various properties on the WebCreationInformation object and add it to a web's Webs collection; the web whose collection you add the creation object to becomes the parent of the new web. To add a new blog site, use the following code:

Web oWeb = clientContext.Web;
WebCreationInformation webBlogCreate = new WebCreationInformation();
webBlogCreate.Description = "This is a new Blog Site";
webBlogCreate.Language = 1033; // English language code is 1033
webBlogCreate.Title = "New Blog Site";
webBlogCreate.Url = "newblogsite";
webBlogCreate.UseSamePermissionsAsParentSite = true;
webBlogCreate.WebTemplate = "BLOG#0";
Web oNewWeb = oWeb.Webs.Add(webBlogCreate);
clientContext.ExecuteQuery();
In the previous example, the value for the Language property is 1033. This is the Microsoft locale code for the English – United States locale. If you create a site for another locale, you need to replace 1033 with the locale code for your locale. A value for the URL of the new site is set. You do not need to put in a full URL here; this needs to be only the final part of the URL. Finally, set the WebTemplate property. In this case you use the string BLOG#0, but you can use any template in your SharePoint 2010 environment. Because this property is just a string, you can even use a custom web template. Now that you have some understanding of the basic building blocks of the Client Object Model, look at using them to access lists.

Working with Lists and Libraries
Lists and document libraries are a key component of the content stored in SharePoint. Lists and libraries are quite similar in design, but they certainly hold different types of information. Lists generally hold data that you can represent in a spreadsheet or tabular manner, whereas libraries hold documents and metadata about those documents. In the following sections you learn how to work with list and library data. You also learn how to create new lists and libraries.
Working with Lists
If you have worked with SharePoint before, lists are probably not new to you. They are one of the main components for storing data in SharePoint. This section discusses how to create, update, and delete lists and list items.

Managing Lists
Before starting to work with list data, you need to create a custom list to work with. This is certainly not required because most SharePoint 2010 site templates come with predefined lists. It is, however, a common development task to create and modify lists. To create a list, use the client context you created in the last section. You also use a new class called ListCreationInformation. To create a list, you create a ListCreationInformation object and set the Title and TemplateType properties. The Title property is just the string title that you want to use for the name of the list.
The TemplateType property is an integer representing the template to use for your new list instance. You can retrieve template types from the ListTemplateType enumeration and cast to an integer, or you can enter them directly as an integer. The following code creates a list titled Custom Announcements using the announcements template:

ListCreationInformation listCreationInfo = new ListCreationInformation();
listCreationInfo.Title = "Custom Announcements";
listCreationInfo.TemplateType = (int)ListTemplateType.Announcements;
List oList = clientContext.Web.Lists.Add(listCreationInfo);
clientContext.ExecuteQuery();
You can also update properties of the list with the Client OM API. To do this you need to load the list using the client context. You can do this with a LINQ to objects query or a CAML query, or by calling the GetByTitle method on the clientContext.Web.Lists collection and passing the title of the list. The GetByTitle approach has the benefit of loading only the list that you modify instead of loading all the lists in the web. After you load the list, you can set the property that you want to update and call the Update method on the list object. Of course, you also must call ExecuteQuery on the client context to send the update to SharePoint, as follows:

List oList = clientContext.Web.Lists.GetByTitle("Custom Announcements");
oList.Description = "This is the new Custom Announcements List";
oList.Update();
clientContext.ExecuteQuery();
Another common list management task is adding a field to an existing list. To perform this task, you need to get a reference to the list object that you want to add the field to. You can add a field of any of the types available in the SharePoint 2010 user interface. You then use the AddField or AddFieldAsXml method to add the field. If you need to set additional attributes on your new field, you can also do that. The following code creates a new number field and sets the minimum and maximum values for the field:

List oList = clientContext.Web.Lists.GetByTitle("Custom Announcements");
Field oField = oList.Fields.AddFieldAsXml(
    "<Field DisplayName='Percent Complete' Type='Number'/>",
    true, AddFieldOptions.DefaultValue);
FieldNumber fieldNum = clientContext.CastTo<FieldNumber>(oField);
fieldNum.MaximumValue = 100;
fieldNum.MinimumValue = 0;
fieldNum.Update();
clientContext.ExecuteQuery();
The final list management task discussed is deleting a list. Deleting a list with the Client OM is simple. As with adding a field or modifying a list, you start by getting a reference to the list you want to delete. Then you call the DeleteObject method on that object. Finally, you call ExecuteQuery on the client context to perform the request.

List oList = clientContext.Web.Lists.GetByTitle("Custom Announcements");
oList.DeleteObject();
clientContext.ExecuteQuery();
Now that you have learned how to create, modify, and delete SharePoint lists, you can learn how to work with list data. As you will see in the next section, this is similar to creating lists.
 
Adding List Items
Creating list items with the Client OM is straightforward. To create a list item and add it to a list, you use the ListItemCreationInformation class. After you instantiate a ListItemCreationInformation object, you pass it to the AddItem method of the list object, which means you first need to get or load the list that you want to modify. With the list item created, you can set the various field values and call the Update method on the list item. Finally, call the ExecuteQuery method on the client context to send your changes to the SharePoint server. The code to perform these steps follows:

List oList = clientContext.Web.Lists.GetByTitle("Custom Announcements");
ListItemCreationInformation itemCreateInfo = new ListItemCreationInformation();
ListItem oListItem = oList.AddItem(itemCreateInfo);
oListItem["Title"] = "New List Item";
oListItem["Body"] = "This is my new List Item";
oListItem.Update();
clientContext.ExecuteQuery();

Updating List Items
Updating list items is easy with the Client OM. All you need to do is get the list item you want to update, set the fields to update, and call the Update method on the list item. The item's fields are accessed and modified through the list item's indexer, in the same way you would modify a dictionary object. The following code gets the list item with ID 3 in the Custom Announcements list and updates its Title field:

List oList = clientContext.Web.Lists.GetByTitle("Custom Announcements");
ListItem listItem = oList.Items.GetById(3);
listItem["Title"] = "My Updated Title.";
listItem.Update();
clientContext.ExecuteQuery();
You can also update more than one list item at a time with only a single call to execute the query. To do this, you need to get the list items that you want to update. The easiest way to get multiple items is to use the GetItems method. This method takes a CamlQuery as a parameter. To illustrate this, you can load all the items that have a Percent Complete value of 50 and update them to 100 like this:

List oList = clientContext.Web.Lists.GetByTitle("Custom Announcements");
CamlQuery query = new CamlQuery();
query.ViewXml = "<View><Query><Where><Eq>" +
    "<FieldRef Name='Percent Complete'/><Value Type='Number'>" +
    "50</Value></Eq></Where></Query></View>";
ListItemCollection collListItem = oList.GetItems(query);
clientContext.Load(collListItem);
clientContext.ExecuteQuery();
foreach (ListItem item in collListItem)
{
    item["Percent Complete"] = 100;
    item.Update();
}
clientContext.ExecuteQuery();

Deleting List Items
Deleting a list item is similar to deleting a list. All you need to do is get the list item you want to delete and call the DeleteObject method on the list item. The following code gets the list item with ID 3 in the Custom Announcements list and deletes it:

List oList = clientContext.Web.Lists.GetByTitle("Custom Announcements");
ListItem listItem = oList.Items.GetById(3);
listItem.DeleteObject();
clientContext.ExecuteQuery();
You can also delete more than one list item at a time with only a single call to execute the query. To do this, you need to get the list items that you want to delete. As you saw in the previous section, the easiest way to get multiple items is to use the GetItems method. The following code fetches all list items that have Percent Complete of 100 and deletes them:

List oList = clientContext.Web.Lists.GetByTitle("Custom Announcements");
CamlQuery query = new CamlQuery();
query.ViewXml = "<View><Query><Where><Eq>" +
    "<FieldRef Name='Percent Complete'/><Value Type='Number'>" +
    "100</Value></Eq></Where></Query></View>";
ListItemCollection collListItem = oList.GetItems(query);
clientContext.Load(collListItem);
clientContext.ExecuteQuery();
// ToList() (from System.Linq) materializes the items first so that deleting
// them does not modify the collection while it is being enumerated.
foreach (ListItem item in collListItem.ToList())
{
    item.DeleteObject();
}
clientContext.ExecuteQuery();

Querying Lists
Now that you know how to access lists, consider how to access subsets of data. There are two ways to query lists with the Client OM. You can write a LINQ to objects query, or you can create a CAML query. Because you can create queries for the Client OM in two ways, you probably wonder how to decide when to use one or the other. As a general rule LINQ style queries are much easier to create. If you have used LINQ to SQL, you are already familiar with the basics. So then why use a CAML query? CAML style queries are not as easy to create as LINQ to objects queries. CAML queries are, however, much faster with the Client OM than LINQ queries are. This performance improvement is because LINQ queries are performed on a frontend web server, whereas CAML queries are performed directly against the database. In addition, you cannot query some objects with LINQ.
For instance, you cannot query list items with LINQ directly. In this case you must perform a CAML query.
These LINQ queries do not use the new LINQ to SharePoint provider. LINQ to SharePoint is available only when you program against the server object model, which is discussed later in this chapter.

Querying with LINQ
If you are familiar with LINQ syntax, you probably know that LINQ can be expressed as method syntax or query syntax. For those of you new to LINQ, a brief explanation of LINQ syntax is given. Query syntax, also called queryable load, is the style of LINQ that looks similar to SQL expressions. It contains familiar SQL keywords such as from, in, where, and select, but the syntax does slightly differ. Still, those familiar with writing SQL queries can quickly pick up the new syntax. If you want to write a LINQ expression in query syntax to load the list in the root web with the name Announcements, it would look like this:

var query = from list
            in clientContext.Web.Lists
            where list.Title == "Announcements"
            select list;
var result = clientContext.LoadQuery(query);
clientContext.ExecuteQuery();
Method syntax, however, looks different. A LINQ expression written in method syntax, also called in-place load, looks like multiple method calls. The previous query in method syntax would look like this:

clientContext.Load(clientContext.Web,
    website => website.Lists.Include(
        list => list.Title).Where(
            list => list.Title == "Announcements"));
clientContext.ExecuteQuery();
You have seen the two styles of LINQ that you can use. Why is this important? The style of LINQ that you use determines how the Client OM loads your objects. If you use the query style syntax, the results of your query are stored in an object instead of in the client context. This is called a queryable load. The method syntax query causes the Client OM to perform an in-place load, meaning that the objects are loaded into the client context. In-place loads keep data in the client context through subsequent loads. With a queryable load, you are responsible for keeping the results in your application because they are not stored in the client context. The benefit of the queryable load is that you can perform multiple queries that each return different results and keep the data separate. If you perform multiple queries using in-place loads and then loop through the results in the client context, you would loop over records that do not match your latest query.
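The practical difference shows up when you enumerate the results. The following sketch reuses the clientContext and the Announcements list title from the earlier examples (it also assumes using directives for System.Linq and System.Collections.Generic):

// Queryable load: the results live in the returned variable, not in the context.
IEnumerable<List> matches = clientContext.LoadQuery(
    clientContext.Web.Lists.Where(list => list.Title == "Announcements"));
clientContext.ExecuteQuery();
foreach (List list in matches)
    Console.WriteLine(list.Title);

// In-place load: the results live in the client context, so you enumerate
// the collection on the context itself.
clientContext.Load(clientContext.Web.Lists);
clientContext.ExecuteQuery();
foreach (List list in clientContext.Web.Lists)
    Console.WriteLine(list.Title);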

Querying with CAML
Querying items with LINQ is nice and easy as you have seen. Unfortunately, you cannot query all the objects in the Client OM using LINQ. For example, you cannot query list items with LINQ. If you want to query only list items with LINQ to objects, your only option is to pass an empty query to the GetItems method and then work with the result. This is not considered to be a good practice with the Client OM. You should not create queries to return too many records. Microsoft recommends against returning 2,000 or more records. This causes a large amount of traffic to cross the wire, and realistically the user experience of a grid with so many records would not be good. If you need to work with so many records, your user interface is probably going to show them in some kind of paged view.
Querying with CAML gives you the ability to execute queries and return the results in pages. To use this built-in paging, use a class called ListItemCollectionPosition. Initially, you set it to null so that the CAML query starts at the first item. You then create a new CamlQuery object and set its ViewXml property; the RowLimit element in the ViewXml is set to the number of items you want returned at once, which is the first part of paged results with CAML. To set up the client request, you create a new ListItemCollection and set it equal to list.GetItems(camlQuery). In this example, list is the List Client OM object that you query and camlQuery is the CamlQuery Client OM object representing your query.

ListItemCollectionPosition itmPosition = null;
CamlQuery query = new CamlQuery();
query.ViewXml = "<View><ViewFields>" +
    "<FieldRef Name='Title'/></ViewFields>" +
    "<RowLimit>10</RowLimit></View>";
while (true)
{
    // Resume from the position returned by the previous page (null = first page).
    query.ListItemCollectionPosition = itmPosition;
    ListItemCollection collListItem = list.GetItems(query);
    context.Load(collListItem);
    context.ExecuteQuery();
    itmPosition = collListItem.ListItemCollectionPosition;
    foreach (ListItem item in collListItem)
        Console.WriteLine("Title: {0}", item["Title"]);
    if (itmPosition == null)
        break;
}

Working with Libraries
If you are somewhat new to SharePoint, you might not be aware of the relationship between lists and document libraries: a document library is a special type of list that, in addition to containing metadata, contains actual files and folders. Because of this relationship, accessing lists and accessing libraries is similar.
Managing Libraries
Because libraries are specialized lists, all the list management operations previously described apply to libraries. Some additional management tasks, however, are specific to libraries. For instance, library contents can be versioned, individual documents can be checked in and out, and documents can be published and unpublished, operations that don't make sense for list items. In this section, you learn how to add and upload documents, work with file versions, and work with publishing.
Adding Documents
Adding items to document libraries is a little more complicated than adding list items to lists.
One major task is getting the actual file bytes into the document library. There are two methods to do this, and each has its own issues. In the first method, you read the bytes into a FileCreationInformation object and add that object to a document library, which returns a SharePoint Client OM File object. You then pass that file object to the Load method on the ClientContext and call the ExecuteQuery method, which uploads the file. Finally, you can get the ListItem that the file is associated with by accessing the ListItemAllFields property on the SharePoint File object that you just loaded. At this point, you have a list item, and you can modify the metadata properties and update it.
The code for this would look like the following:

ClientContext context = new ClientContext("http://localhost/");
Web web = context.Web;
string fileName = "TestFile.doc";
FileCreationInformation newFile = new FileCreationInformation();
newFile.Content = System.IO.File.ReadAllBytes(@"C:\TestFile.doc");
newFile.Url = fileName; // relative to the target folder
List docs = web.Lists.GetByTitle("Shared Documents");
File uploadFile = docs.RootFolder.Files.Add(newFile);
context.Load(uploadFile);
context.ExecuteQuery();
ListItem item = uploadFile.ListItemAllFields;
// Set the metadata
string docTitle = "Test File"; // sample title value
item["Title"] = docTitle;
item.Update();
context.ExecuteQuery();
This approach is relatively straightforward and makes it simple to set the document metadata. It is also good for creating folders in libraries. This approach does, however, come with a major issue.
Using the FileCreationInformation approach works only for files that are not too large. You get server errors when the file is larger than the maximum upload size that your web application is configured to allow. Although there are ways to change this setting, there is actually a better way to add and upload documents.
The second method is to utilize the WebDAV support in SharePoint by calling the static SaveBinaryDirect method on the Client OM File class. This method takes a client context, a server-relative file path, a stream object, and a Boolean flag indicating whether the method should replace an existing file.
 
context.Load(list.RootFolder, folder => folder.ServerRelativeUrl);
context.ExecuteQuery();
string path = list.RootFolder.ServerRelativeUrl + "/";
// txtFilename and txtTitle are text boxes in the client application; only the
// file name portion of the local path is used for the target URL.
string fileName = System.IO.Path.GetFileName(txtFilename.Text);
using (FileStream fs = new FileStream(txtFilename.Text, FileMode.Open))
{
    File.SaveBinaryDirect(context, path + fileName, fs, true);
}
string file = path + fileName;
CamlQuery query = new CamlQuery();
query.ViewXml = "<View><Query><Where><Eq><FieldRef Name='FileRef'/>" +
    "<Value Type='Text'>" + file +
    "</Value></Eq></Where></Query>" +
    "<RowLimit>2</RowLimit></View>";
ListItemCollection collListItem = list.GetItems(query);
context.Load(collListItem);
context.ExecuteQuery();
if (collListItem.Count == 1)
{
    collListItem[0]["Title"] = txtTitle.Text;
    collListItem[0].Update();
    context.ExecuteQuery();
}
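The section introduction also mentioned working with versions and publishing. As a minimal sketch, reusing the context from the previous example and assuming the document uploaded above sits in a library with minor versioning enabled, check-out, check-in, and publish look like this:

File docFile = context.Web.GetFileByServerRelativeUrl("/Shared Documents/TestFile.doc");
docFile.CheckOut();
context.ExecuteQuery();

// Check the change in as a minor version, then publish it as a major version.
docFile.CheckIn("Checked in from the Client OM", CheckinType.MinorCheckIn);
docFile.Publish("Published from the Client OM");
context.ExecuteQuery();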
This chapter has just touched the surface of what you can do with the Client Object Model, but it has hopefully given you an understanding of the power it exposes.