Saturday, July 14, 2012

Joomla!™ 1.6!


What Is a Content Management System?
What exactly is a content management system (CMS)? To better understand the power of a CMS, you need to understand a few things about traditional web pages.
Conceptually, there are two aspects to a web page: its content and the presentation of that content. Over the past decade, there has been an evolution in how these two pieces interact:
  • Static web pages—The content and presentation are in the same file.
  • Web pages with Cascading Style Sheets (CSS)—The content and presentation are separated.
  • Dynamic web pages—Both content and presentation are separated from the web page itself.
     
Static Web Pages
A web page is made up of a set of instructions written in Hypertext Markup Language (HTML) that tells your browser how to present the content of a web page. For example, the code might say, “Take this title ‘This is a web page,’ make it large, and make it bold.”
This way of creating a web page is outdated, but an astonishing number of designers still create sites using this method. Pages created using this method have two main drawbacks:
  • Difficult to edit and maintain—All the content shown on the page (“This is a web page”) and the presentation (big and bold) are tied together. If you want to change the color of all your titles, you have to make changes to all the pages in your site to do so.
  • Large file sizes—Because each bit of content is individually styled, the pages are big, which means they take a long time to load. Most experts agree that large file sizes hurt your search engine optimization efforts because most search engines tend not to completely index large pages.
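For instance, a title on a static page might carry all of its styling inline, in the old presentational style (a hypothetical snippet):

```html
<!-- A "static" page: content and presentation welded together -->
<h1><font size="6" color="navy"><b>This is a web page</b></font></h1>
<!-- Changing every title's color means hand-editing every page -->
```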
Web Pages with CSS
    In an effort to overcome the drawbacks of static web pages, over the past four or five years, more comprehensive web standards have been developed. Web standards are industrywide “rules” that web browsers such as Internet Explorer and Mozilla Firefox follow (to different degrees, some better than others) to consistently output web pages onto your screen. One of these standards involves using Cascading Style Sheets (CSS) to control the visual presentation of a web page. CSS is a simple mechanism for adding
style (for example, fonts, colors, spacing) to web documents. All this presentation information is usually contained in files that are separate from the content and reusable across many pages of a site.
    Now the file containing the content is much smaller because it does not contain presentation or style information. All the styling has been placed in a separate file that the browser reads and applies to the content to produce the final result.
Using CSS to control the presentation of the content has big advantages:
  •  Maintaining and revising the page is much easier. If you need to change all the title colors, you can just change one line in the CSS file.
  •  Both files are much smaller, which allows pages to load much more quickly than pages built with inline-styled HTML alone.
  •  The CSS file will be cached (saved) on a viewer's local computer so that it won't need to be downloaded from the Web each time the viewer visits a different page that uses the same styling rules.
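To make the split concrete, here is a hypothetical pair of files, with the content page reduced to plain markup and all styling pulled into a shared stylesheet:

```html
<!-- page.html: content only -->
<link rel="stylesheet" href="style.css">
<h1>This is a web page</h1>
```

```css
/* style.css: presentation only, cached and shared by every page */
h1 { font-size: 2em; font-weight: bold; color: navy; }
```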
Dynamic Web Pages
    A CMS further simplifies web pages by creating dynamic web pages. Whereas CSS separates presentation from content, a CMS separates the content from the page. Therefore, a CMS does for content what CSS does for presentation. It seems that between CSS and a CMS, there’s nothing left of a web page, but in reality what is left can be thought of as insertion points, or placeholders, in a structural template or layout.
          The “put some content here” instruction tells the CMS to take some content from a database, the “pure content,” and place it in a designated place on the page. So what’s so useful about that trick? It’s actually very powerful: It separates out the responsibilities for developing a website. A web designer can be concerned with the presentation or style and the placement of content within the design layout - the placeholders. This means that nontechnical people can be responsible for the content - the words and pictures of a website - without having to know any code languages, such as HTML and CSS, or worry about the aesthetics of how the content will be displayed. Most CMSs have built-in tools to manage the publication of content.
          It’s possible to imagine a workflow for content management that involves both designers and content authors.
A CMS makes the pages dynamic. A page doesn’t really exist until you follow a link to view it, and the content might be different each time you view it. This means a page’s content can be updated and customized based on the viewer’s interactions with the page. For example, if you place an item in a shopping cart, that item shows up on the shopping cart page. It was stored in a database and now gets inserted into the “shopping cart placeholder.” Many complex web applications—for example, forums, shopping carts, and guest books, to name a few—are in fact mini CMSs (by this definition).
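The placeholder mechanism can be sketched in a few lines of Python (the names and layout here are hypothetical; a real CMS uses a database and a template engine, but the division of labor is the same):

```python
# A minimal sketch of the CMS idea: a layout with placeholders,
# and "pure content" that lives apart from the page.
LAYOUT = "<html><body><h1>{title}</h1><div>{body}</div></body></html>"

# Stand-in for the database of content, maintained by nontechnical authors.
content_store = {
    "about": {"title": "About Us", "body": "We make widgets."},
}

def render_page(page_id: str) -> str:
    """Build the page on demand by pouring stored content into the layout."""
    content = content_store[page_id]
    return LAYOUT.format(**content)

print(render_page("about"))
```

The designer owns `LAYOUT`; the authors own `content_store`; neither needs to touch the other's work.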

The Joomla! Community
A large and active community is an important factor in the success of an open source project. The Joomla community is both big and active. The official forum at forum.joomla.org is perhaps one of the biggest forum communities on the Web. In addition, there are many forums on Joomla’s international sites and the respective sites of its third-party extension developers.

Third-Party Extensions Development
    Joomla is unique among open source CMSs in the number and nature of the nonofficial developers who create extensions for it. It’s hard to find a Joomla site that doesn’t use at least one extension. The true power of Joomla lies in the astonishing range of extensions that are available.
    The nature of Joomla developers is interesting. There is an unusually high proportion of commercial developers and companies creating professional extensions for Joomla. Although open source and commercial development might seem unlikely bedfellows, many commentators have pointed to this characteristic of the Joomla project as a significant contributor to its growth.

Joomla!’s Features
Joomla has a number of “out of the box” features. When you download Joomla from www.joomlacode.org, you get a zip file of about 5MB that needs to be installed on a web server. Running an installation extracts all the files and enters some “filler” content into the database. In no particular order, the following are some of the features of the base installation:
  • Simple creation and revision of content using a text editor from the main front-end website or through a nonpublic, back-end administration site
  • User registration and the ability to restrict viewing of pages based on user level
  • Control of editing and publishing of content based on various admin user levels 
  • Simple contact forms
  • Public site statistics
  • Private detailed site traffic statistics
  • Built-in sitewide content search functionality
  • Email, PDF, and print capability
  • RSS (and other) syndication
  • Simple content rating system
  • Display of newsfeeds from other sites
As you can see, Joomla has some tremendous features. To have a web designer create all these features for a static site would cost tens of thousands of dollars, but it doesn’t stop there. Joomla has a massive community of developers worldwide (more than 30,000) who have contributed more than 5,000 extensions for Joomla, most of which are free.
The following are some of the most popular extension types:
  • Forums
  • Shopping carts
  • Email newsletters
  • Calendars
  • Document and media download managers
  • Photo galleries
  • Forms
  • User directories and profiles
Each extension can be installed into Joomla to extend its functionality in some manner. Joomla has been very popular partly because of the availability of the huge and diverse range of extensions.
To customize your site further, you can easily find highly specialized extensions, such as the following:
  • Recipe managers
  • Help/support desk management
  • Fishing tournament tracking
  • AdSense placement
      


Thursday, April 26, 2012

Testing the Scorer of a Clojurebreaker Game


In the previous section, we developed the score function iteratively at the REPL and saw it work correctly with a few example inputs. It doesn’t take much commitment to quality to want to do more validation than that! Let’s begin by teasing apart some of the things that people mean when they say “testing.”
Testing includes the following:
·        Thinking through whether the code is correct
·        Stepping through the code in a development environment where you can see everything that is happening
·        Crafting inputs to cover the various code paths
·        Crafting outputs to match the crafted inputs
·        Running the code with various inputs
·        Validating the results for correctness
·        Automating the validation of results
·        Organizing tests so that they can be automatically run for regression purposes in the future
This is hardly an exhaustive list, but it suffices to make the point of this section. In short, testing is often complex, but it can be simple. Traditional unit-testing approaches complect many of the testing tasks listed earlier. For example, input, output, execution, and validation tend to be woven together inside individual test methods. On the other hand, the minimal REPL testing we did before simply isn’t enough. Can we get the benefit of some of the previous ideas of testing, without the complexity of unit testing? Let’s try.
 
Crafting Inputs
We have already seen the score function work for a few handcrafted inputs. How many inputs do we need to convince ourselves the function is correct? In a perfect world, we would just test all the inputs, but that is almost always computationally infeasible. But we are lucky in that the problem of scoring the game is essentially the same for variants with a different number of colors or a different number of pegs. Given that, we actually can generate all possible inputs for a small version of the game.
The branch of math that deals with the different ways of forming patterns is called enumerative combinatorics. It turns out that the Clojure library math.combinatorics has the functions we need to generate all possible inputs.
Add the following form under the :dependencies key in your project.clj file, if it is not already present:

[org.clojure/math.combinatorics "0.0.1"]
The selections function takes two arguments (a collection and a size), and it returns every structure of that size made up of elements from that collection.
Try it for a tiny version of Clojurebreaker with only three colors and two columns:

(require '[clojure.math.combinatorics :as comb])
(comb/selections [:r :g :b] 2)
-> ((:r :r) (:r :g) (:r :b)
    (:g :r) (:g :g) (:g :b)
    (:b :r) (:b :g) (:b :b))
So, selections can give us a possible secret or a possible guess. What about generating inputs to the score function? Well, that is just selecting two selections from the selections:
 
(-> (comb/selections [:r :g :b] 2)
    (comb/selections 2))
-> (81 pairs of game positions omitted for brevity)
Let’s put that into a named function:

clojurebreaker/src/clojurebreaker/game.clj
(defn generate-turn-inputs
  "Generate all possible turn inputs for a clojurebreaker game
   with colors and n columns"
  [colors n]
  (-> (comb/selections colors n)
      (comb/selections 2)))
All right, inputs generated. We are going to skip thinking about outputs (for reasons that will become obvious in a moment) and turn our attention to running the scorer with our generated inputs.
 
Running a Test
We are going to write a function that takes a sequence of inputs and reports a sequence of inputs and the result of calling score. We don’t want to commit (yet) to how the results of this test run will be validated. Maybe a human will read it. Maybe a validator program will process the results. Either way, a good representation of each result might be a map with the keys secret, guess, and score.
All this function needs to do is call score and build the collection of responses:

clojurebreaker/src/clojurebreaker/game.clj
(defn score-inputs
  "Given a sequence of turn inputs, return a lazy sequence of
   maps with :secret, :guess, and :score."
  [inputs]
  (map
   (fn [[secret guess]]
     {:secret (seq secret)
      :guess (seq guess)
      :score (score secret guess)})
   inputs))
Try it at the REPL:

(->> (generate-turn-inputs [:r :g :b] 2)
     (score-inputs))
-> ({:secret (:r :r), :guess (:r :r),
     :score {:exact 2, :unordered 0}}
    {:secret (:r :r), :guess (:r :g),
     :score {:exact 1, :unordered 0}}
    ;; remainder omitted for brevity
If a human is going to be reading the test report, you might decide to format a text table instead, using print-table. While we are at it, let’s generate a bigger game (four colors by four columns) and print the table to a file:

(use 'clojure.pprint)
(require '[clojure.java.io :as io])
(with-open [w (io/writer "scoring-table")]
  (binding [*out* w]
    (print-table (->> (generate-turn-inputs [:r :g :b :y] 4)
                      (score-inputs)))))
-> nil
If you look at the scoring-table file, you should see 65,536 different secret/guess combinations and their scores.
 
Validating Outputs
At this point, it is obvious why we skipped crafting the outputs. The program has done that for us. We just have to decide how much effort to spend validating them. Here are some approaches we might take:
·        Have a human code-breaker expert read the entire output table for a small variant of the game. This has the advantage of being exhaustive but might miss logic errors that show up only in a larger game.
·        Pick a selection of results at random from a larger game and have a human expert verify them.
 
Because the validation step is separated from generating inputs and running the program, we can design and write the various steps independently, possibly at separate times. Moreover, the validator knows nothing about how the inputs were generated. With unit tests, the inputs and outputs come from the same programmer’s brain at about the same time. If that programmer is systematically mistaken about something, the tests simply encode mistakes as truth. This is not possible when the outputs to validate are chosen exhaustively or randomly.
We will return to programmatic validation later, but first let’s turn to regression testing.
 
Regression Testing
How would you like to have a regression suite that is more thorough than the validation effort you have made? No problem.
·        Write a program whose results should not change.
·        Run the program once, saving the results to a (well-named!) file.
·        Run the program again every time the program changes, comparing with the saved file.
If anything is different, the program is broken.
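This “golden file” approach can be sketched in Python (the file and function names here are hypothetical stand-ins for your own program):

```python
import os

def score_report() -> str:
    """Stand-in for the program whose results should not change
    (e.g., the scoring table built from pure functions)."""
    return "\n".join(f"{a}+{b}={a + b}" for a in range(3) for b in range(3))

def check_regression(path: str = "scoring-table.golden") -> bool:
    """First run saves the golden file; later runs compare against it."""
    current = score_report()
    if not os.path.exists(path):
        with open(path, "w") as f:
            f.write(current)
        return True                      # nothing to compare yet
    with open(path) as f:
        return f.read() == current       # any difference means a regression
```

The comparison is only meaningful if `score_report` is deterministic, which is exactly why the steps above insist on a program built from pure functions.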
The nice thing about this regression approach is that it works even if you never did any validation of results. Of course, you should still do validation, because it will help you narrow down where a problem happened. (With no validation, the regression error might just be telling you that the old code was broken and the new code fixed it.)
How hard is it to write a program that should produce exactly the same output?
Call only pure functions from the program, which is exactly what our score-inputs function does.
Wiring this kind of regression test into a continuous integration build is not difficult. If you do it, think about contributing it to whatever testing framework you use.
Now we have partially answered the question, “How do I make sure that my code is correct?” In summary:
·        Build with small, composable pieces (most should be pure functions).
·        Test forms from the inside out at the REPL.
·        When writing test code, keep input generation, execution, and output validation as separate steps.
This last idea is so important that it deserves some library support. So, before we move on, we are going to introduce test.generative, a library that aspires to bring simplicity to testing.
 

Scoring a Clojurebreaker Game


As a Clojure programmer, one question you will often ask is, “Where do I need state to solve this problem?” Or, better yet, “How much of this problem can I solve without using any state?” With Clojurebreaker (and with many other games), the game logic itself is a pure function. It takes a secret and a guess and returns a score. Identifying this fact early gives us two related advantages:
·        The score function will be trivial to write and test in isolation.
·        We can comfortably proceed to implement score without even thinking about how the rest of the system will work.
Scoring itself divides into two parts: tallying the exact matches and tallying the matches that are out of order. Each of these parts can be its own function. Let’s start with the exact matches. To make things concrete, we will pick a representation for the pegs that facilitates trying things at the REPL: the four colors :r (red), :g (green), :b (blue), and :y (yellow). The function will return the count of exact matches, which we can turn into black pegs in a separate step later. Here is the shell of the function we think we need:

clojurebreaker/snippets.clj
(defn exact-matches
  "Given two collections, return the number of positions where
   the collections contain equal items."
  [c1 c2])
Hold the phone—that doc string doesn’t say anything about games or colors or keywords. What is going on here? While some callers (e.g., the game) will eventually care about the representation of the game state, exact-matches doesn’t need to care. So, let’s keep it generic. A key component of responsible Clojure design is to think in data, rather than pouring object concrete at every opportunity.
When described as a generic function of data, exact-matches sounds like a function that might already exist. After searching through the relevant namespaces (clojure.core and clojure.data), we discover that the closest thing to exact-matches is clojure.data’s diff. diff recursively compares two data structures, returning a three-tuple of things-in-a, things-in-b, and things-in-both. The things-in-both is nothing other than the exact matches we are looking for.
Try it at the REPL:

(require '[clojure.data :as data])
(data/diff [:r :g :g :b] [:r :y :y :b])
-> [[nil :g :g] [nil :y :y] [:r nil nil :b]]
The non-nil entries in [:r nil nil :b] are the exact matches when comparing r/g/g/b and r/y/y/b. With diff in hand, the implementation of exact-matches is trivial:

clojurebreaker/src/clojurebreaker/game.clj
(defn exact-matches
  "Given two collections, return the number of positions where
   the collections contain equal items."
  [c1 c2]
  (let [[_ _ matches] (data/diff c1 c2)]
    (count (remove nil? matches))))
Again, we test at the REPL against an example input:

(exact-matches [:r :g :g :b] [:r :y :y :b])
-> 2
Now let’s turn our attention to the unordered matches. To calculate these, we need to know how many of each colored peg are in the secret and in the guess. This sounds like a job for the frequencies function:

(def example-secret [:r :g :g :b])
(frequencies example-secret)
-> {:r 1, :g 2, :b 1}
(def example-guess [:y :y :y :g])
(frequencies example-guess)
-> {:y 3, :g 1}
To turn those two frequencies into the unordered-matches, we need to do two additional things:
·        Consider only the keys that are present in both the secret and the guess
·        Count only the overlap (i.e., the minimum of the vals under each key)
Again, we hope these operations already exist, and happily they do. You can keep the keys you need with select-keys:

(select-keys (frequencies example-secret) example-guess)
-> {:g 2}
(select-keys (frequencies example-guess) example-secret)
-> {:g 1}
And you can count the overlap between two frequency maps using merge-with:

(merge-with min {:g 1} {:g 2})
-> {:g 1}
Combining frequencies and select-keys and merge-with gives the following definition for unordered-matches:

clojurebreaker/src/clojurebreaker/game.clj
(defn unordered-matches
  "Given two collections, return a map where each key is an item
   in both collections, and each value is the number of times the
   value occurs in the collection with fewest occurrences."
  [c1 c2]
  (let [f1 (select-keys (frequencies c1) c2)
        f2 (select-keys (frequencies c2) c1)]
    (merge-with min f1 f2)))
which, of course, we should verify at the REPL:
(unordered-matches [:r :g :g :b] [:y :y :y :g])
-> {:g 1}
That’s nice, with one subtlety. unordered-matches counts matches regardless of order, while the game will want to know only the matches that are not in the right order. Even though the game doesn’t seem to need unordered-matches, writing it was a win because of the following:
·        unordered-matches does exactly one thing. To write a not-ordered match, we would have to reimplement exact-matches inside unordered-matches.
·        The two simple functions we just wrote are exactly the functions we need to compose together to get the not-ordered semantics. Just subtract the results of exact-matches from the results of unordered-matches.
With the two primitives in place, the score operation simply compounds them:

clojurebreaker/src/clojurebreaker/game.clj
(defn score
  [c1 c2]
  (let [exact (exact-matches c1 c2)
        unordered (apply + (vals (unordered-matches c1 c2)))]
    {:exact exact :unordered (- unordered exact)}))
And the REPL rejoices:
(score [:r :g :g :b] [:r :y :y :g])
-> {:exact 1, :unordered 1}
At this point, we have demonstrated a partial answer to the question, “What is a good workflow for writing code?” In summary:

·        Break apart the problem to identify pure functions.
·        Learn the standard library so you can find functions already written.
·        Pour no concrete (use data as data).
·        Test inside out from the REPL.
In our experience, programmers trying this workflow for the first time make two typical mistakes:
·        Coding too much
·        Complicating the tests
You have written too much code whenever you don’t understand the behavior of a form, but you haven’t yet tested and understood all of its subforms. Many developers have an intuition of “write X lines of code and then test,” where X is the smallest number of lines that can do something substantial. In Clojure, X is significantly smaller than one, which is why we emphasize building functions inside out at the REPL.
“Complicating the tests” is more subtle, and we will take it up in the next section.

Programming Clojure - Living Without Multimethods


 The best way to appreciate multimethods is to spend a few minutes living without them, so let’s do that. Clojure can already print anything with print/println. But pretend for a moment that these functions do not exist and that you need to build a generic print mechanism. To get started, create a my-print function that can print a string to the standard output stream *out*:
src/examples/life_without_multi.clj
(defn my-print [ob]
  (.write *out* ob))
Next, create a my-println that simply calls my-print and then adds a line feed:

src/examples/life_without_multi.clj
(defn my-println [ob]
  (my-print ob)
  (.write *out* "\n"))
The line feed makes my-println’s output easier to read when testing at the REPL. For the remainder of this section, you will make changes to my-print and test them by calling my-println. Test that my-println works with strings: 
(my-println "hello")
| hello
-> nil
That is nice, but my-println does not work quite so well with nonstrings such as nil:

(my-println nil)
-> java.lang.NullPointerException
That’s not a big deal, though. Just use cond to add special-case handling for nil:

src/examples/life_without_multi.clj
(defn my-print [ob]
  (cond
    (nil? ob) (.write *out* "nil")
    (string? ob) (.write *out* ob)))
With the conditional in place, you can print nil with no trouble:
(my-println nil)
| nil
-> nil
Of course, there are still all kinds of types that my-println cannot deal with. If you try to print a vector, neither of the cond clauses will match, and the program will print nothing at all:

(my-println [1 2 3])
-> nil
By now you know the drill. Just add another cond clause for the vector case.
The implementation here is a little more complex, so you might want to separate the actual printing into a helper function, such as my-print-vector:

src/examples/life_without_multi.clj
(require '[clojure.string :as str])
(defn my-print-vector [ob]
  (.write *out* "[")
  (.write *out* (str/join " " ob))
  (.write *out* "]"))
(defn my-print [ob]
  (cond
    (vector? ob) (my-print-vector ob)
    (nil? ob) (.write *out* "nil")
    (string? ob) (.write *out* ob)))
Make sure that you can now print a vector:

(my-println [1 2 3])
| [1 2 3]
-> nil
my-println now supports three types: strings, vectors, and nil. And you have a road map for new types: just add new clauses to the cond in my-print. But it is a crummy road map, because it conflates two things: the decision process for selecting an implementation and the specific implementation detail.
You can improve the situation somewhat by pulling out helper functions like my-print-vector. However, then you have to make two separate changes every time you want to add a new feature to my-print:
·        Create a new type-specific helper function.
·        Modify the existing cond in my-print to add a new clause invoking the type-specific helper.
 
What you really want is a way to add new features to the system by adding new code in a single place, without having to modify any existing code. Clojure offers this by way of multimethods.
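Outside Clojure, the same open-dispatch idea can be sketched with a handler registry; this hypothetical Python version returns strings rather than writing to an output stream, but the point stands: supporting a new type means one new registration, with no edits to existing code:

```python
# Open dispatch: a registry maps a predicate to an implementation,
# so new cases are added without modifying the dispatcher.
handlers = []

def handles(pred):
    """Decorator: register an implementation for values matching pred."""
    def register(fn):
        handlers.append((pred, fn))
        return fn
    return register

def my_print(ob) -> str:
    # The dispatcher never changes: it just consults the registry.
    for pred, fn in handlers:
        if pred(ob):
            return fn(ob)
    raise TypeError(f"no handler for {type(ob).__name__}")

@handles(lambda ob: ob is None)
def print_nil(ob):
    return "nil"

@handles(lambda ob: isinstance(ob, str))
def print_str(ob):
    return ob

# Adding vector (list) support later is one new registration, nowhere else:
@handles(lambda ob: isinstance(ob, list))
def print_vector(ob):
    return "[" + " ".join(my_print(x) for x in ob) + "]"
```

Calling `my_print(["a", "b"])` yields `"[a b]"`, and no existing function had to change to make that work.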

Saturday, April 21, 2012

Fixing On-Location Flash Photos in Photoshop CS5

Step One:
First, let’s look at the problem: Here’s a shot I took at sunset using an off-camera flash (the flash is up high and to the right of my camera position, aiming down at the subject and firing through a shoot-through umbrella). At this point in the shoot, I didn’t remember to add a CTO gel to warm the light, so the light from the flash is bright white, which looks really out of place in a beach sunset shot like this. The light should be warm, like the light from a setting sun, not a white flash.


Step Two:
To warm the light from the flash, go to the Adjustments panel and click on the Photo Filter icon (it’s the second icon from the right in the middle row). The Photo Filter controls will appear, and from the Filter pop-up menu, choose Orange (as seen here), then increase the Density to around 55%. So, how did I know 55% was right?
I opened a photo from a few minutes later in the shoot, when I had added a CTO gel to my flash, and matched the color and amount, but actually the amount doesn’t matter as much, because we’ll be able to lower it later if it’s too much. The whole image gets the Photo Filter, and it changes the color of the sky, and well…everything, but we just want to change the color of the light.


 Step Three:
What we need to do is hide the overall orange color, and then just apply it where we want it (where the light is actually falling on the subject). To do that, just press Command-I (PC: Ctrl-I) to Invert the layer mask attached to your Photo Filter adjustment layer, so your orange filter is hidden behind a black layer mask.
Now, get the Brush tool (B), press D to switch your Foreground color to white, and paint over your subject’s skin, hair, clothes, and anywhere the light from the flash is falling (as shown here). That way, the orange only affects where the light from the flash lands.




Step Four:
Remember in Step Two where I said I wasn’t worried about the amount because I could change it later? That’s now. Because we used an adjustment layer, we can just go to the Layers panel and lower the Opacity to lower the amount of orange (I lowered it to 64% here). If, instead of needing to lower the amount, you need more orange, then just double-click directly on the adjustment layer itself (in the Layers panel) and it reopens the Photo Filter controls in the Adjustments panel, so you can increase the Density amount. Here’s the final image, with the orange gel effect added in Photoshop.


The Fastest Way to Resize Brushes Ever (Plus, You Can Change Their Hardness, Too) in Photoshop CS5

Step One:
When you have a Brush tool selected, just press-and-hold Option-Control (PC: Ctrl-Alt) and then click-and-drag (PC: Right-click-and-drag) to the right or left onscreen. A red brush preview will appear inside your cursor (as seen here)—drag right to increase the brush size preview or left to shrink the size. When you’re done, just release those keys and you’re set. Not only is this the fastest way to resize, it shows you more than just the round brush-size cursor—it includes the feathered edges of the brush, so you see the real size of what you’ll be painting with (see how the feathered edge extends beyond the usual round brush size cursor)?
 
TIP: Change Your Preview Color
If you want to change the color of your brush preview, go to Photoshop’s Preferences (Command-K [PC: Ctrl-K]), click on Cursors on the left, and in the Brush Preview section, click on the red Color swatch, which brings up a Color Picker where you can choose a new color.


Step Two:
To change the Hardness setting, you do almost the same thing—press-and-hold Option-Control (PC: Ctrl-Alt), but this time, click-and-drag (PC: Right-click-and-drag) down to harden the edges, and up to make them softer (here I dragged down so far that it’s perfectly hard-edged now).
 
TIP: Turn on OpenGL Drawing
If you don’t see the red brush preview, you’ll need to check your preferences first. So, go to Photoshop’s Preferences (Command-K [PC: Ctrl-K]), and click on Performance on the left side. In the GPU Settings section near the bottom right, turn on the Enable OpenGL Drawing checkbox, then restart Photoshop.



Fixing Dark Eye Sockets in Photoshop CS5

Step One:
Here’s the image we’re going to work on, and if you look at her eyes, and the eye socket area surrounding them, you can see that they’re a bit dark. Brightening the whites of the eyes would help, but the area around them will still be kind of shadowy, so we may as well kill two birds with one stone, and fix both at the same time.

Step Two:
Go to the Layers panel and duplicate the Background layer (the quickest way is just to press Command-J [PC: Ctrl-J]). Now, change the blend mode of this duplicate layer from Normal to Screen (as seen here). This makes the entire image much brighter.

Step Three:
We need to hide the brighter layer from view, so press-and-hold the Option (PC: Alt) key and click on the Add Layer Mask icon at the bottom of the Layers panel (it’s shown circled here in red). This hides your brighter Screen layer behind a black layer mask (as seen here). Now, switch to the Brush tool (B), choose a smallish, soft-edged brush, and paint a few strokes over the dark eye sockets and eyes (as shown here). Now, I know
at this point, it looks like she was out in the sun too long with a large pair of sunglasses on, but we’re going to fix that in the next step.


Step Four:
What brings this all together is lowering the Opacity of this layer, until the parts that you painted over and brightened in the previous step blend in with the rest of her face. This takes just a few seconds to match the two up, and it does an incredibly effective job. See how, when you lower the Opacity to around 35% (which works for this particular photo—each photo and skin tone will be different, so your opacity amount will be, too), it blends right in? Compare this image in Step Four with the one in Step One and you’ll see what I mean. If you’re doing a lot of photos, like high school senior portraits, or bridesmaids at a wedding, this method is much, much faster than fixing everyone’s eyes individually.





Getting Acquainted with Enterprise Linux


Enterprise Linux has four versions: Two of the versions are designed for workstation and desktop usage, and the other two versions are designed for server applications. Don’t get too bogged down trying to sort out the differences among these versions because the four versions of Enterprise Linux are really quite similar. In this chapter, I examine the different versions of Red Hat Enterprise Linux and what you can do with them. Before I go into the version descriptions, take a look at the history of Enterprise Linux.

Exploring the History of Enterprise Linux
Red Hat Enterprise Linux is one of many available distributions of Linux. Several companies make their own commercial Linux distributions, but in this book, I discuss the Enterprise Linux distribution by Red Hat. A Linux distribution is a complete version of the Linux operating system that contains the Linux kernel as well as other applications and programs that can be used for doing some type of work. The Linux kernel is the core of the Linux operating system and controls how the operating system functions with the hardware that makes up your PC. (Linux was originally developed by Linus Torvalds in 1991 while he was a college student in Finland.)
I don’t want to bore you with a lot of historical information about Enterprise Linux, but a little background information for a better understanding of the Linux kernel and version numbers is helpful. Exact dates aren’t important, so I’ll just give you the quick rundown of the history of Red Hat Linux and the introduction of Enterprise Linux.
The first publicly available version of Red Hat Linux appeared in the summer of 1994 and was based on kernel version 1.09. (The kernel is identified by a number that refers to the particular version of the kernel.) Since that first release, there have been many more, with each release improving upon the earlier versions. Until May 2002, Red Hat made no distinction between distributions suited for home use and those suited for commercial (business) use. By then, Red Hat was at release 7.3 of the Red Hat Linux distribution. Coinciding with the release of version 7.3 was the introduction of Red Hat Linux Advanced Server 2.1, which was later renamed Enterprise Linux 2.1. Enterprise version 2.1 was based on Red Hat 7.3 but was intended for commercial/business use. The major difference between the commercial and home versions was in the support offerings. The home version, if purchased through a boxed set, gave the user a limited number of technical support calls for a short time period, and then the users were on their own. The commercial version provided a longer period of technical support, with additional support available at extra cost. Also, Red Hat had been issuing a new version of its operating system about every six months, which changed far too often for most commercial uses. With the release of Enterprise Linux 2.1, Red Hat slowed the pace of system changes to give users a more stable platform (thus requiring less frequent updates) and focused its commercial efforts on the Enterprise version.
From this point forward, Red Hat continued development of its home user versions through version 8 and finally version 9, which was the last Red Hat distribution that was available for home user purchase. In the summer of 2003, Red Hat decided that it would merge its open development process with the Fedora Linux project — and the Fedora Project was born.
In October 2003, Red Hat introduced Enterprise 3, which, like its predecessor Enterprise 2.1, was specifically geared toward business/enterprise users. Enterprise 3 was initially available in three versions (AS, ES, and WS), each designed for specific types of service. In the summer of 2004, Red Hat added another version of Enterprise 3 specifically for the desktop. That brings us to the present, Enterprise version 4, which is the focus of this book.

Examining the Versions of Red Hat Enterprise
All versions of Enterprise Linux share some similarities in their product features. The most significant of these features are
·        A 12–18 month release cycle
·        A common operating system, applications, and management tools
·        One year of support and updates through the Red Hat Network included with the initial purchase, renewable annually for five years for an additional yearly fee
Having a 12–18 month release cycle makes the update process more predictable because a user knows that he won't have to make any major changes to his system configuration for at least a year and perhaps longer. Because all versions are based on the same operating system, a system administrator can configure and maintain them more easily and consistently; the same skill set is used for all versions.
Probably the most significant feature of Enterprise Linux is the level(s) of support available from Red Hat. One of the most frequently heard criticisms of Linux is the lack of user support typically available. With Enterprise 3, and Enterprise version 4 covered in this book, Red Hat has seriously addressed the support issue.
In the following sections, I examine the different versions of Enterprise Linux 4. (For installation details, see Appendix A.) Then I conclude the chapter with what Enterprise Linux can do for you.

Red Hat Enterprise AS
Red Hat Enterprise AS is the top-of-the-line server operating system available from Red Hat. Enterprise AS is designed for large departments or company data centers. The AS version provides the same server functions as the ES version but is best suited for servers that have more than two CPUs with greater than 8GB of system RAM. In addition to support for more than two CPUs in the same system, there is support for many different hardware architectures as well, such as IBM iSeries, pSeries, and zSeries systems.
The greatest difference between the AS and ES (see the following section) versions is the level of support available with the AS version. Users can purchase the premium level support option that provides 24/7 support with a guaranteed one-hour response time.

Red Hat Enterprise ES
Red Hat Enterprise ES is intended to provide for an entry-level or midrange server environment with support for up to two CPUs and 8GB of system RAM.
The ES version is quite similar to the AS version (see the preceding section) but is meant for smaller-scale operations and does not provide the same level of support as the AS version. The ES version includes the following applications:
·        Web server
·        Network services (DNS [Domain Name System], DHCP [Dynamic Host Configuration Protocol], firewall security, and more)
·        File/print/mail servers
·        SQL (Structured Query Language) databases

Red Hat Enterprise WS
Red Hat Enterprise WS provides nearly the same functionality as the Desktop version. Included with WS are the same Web browser, office suite, and e-mail client (Firefox, OpenOffice.org 1.1, and Evolution, respectively). The major difference between the WS and Desktop (see the following section) versions is the number of CPUs supported. The WS version supports up to two CPUs, but the Desktop version supports only one.

Red Hat Desktop
According to Red Hat, Enterprise 4 Desktop is “a high-quality, full-featured client system for use in a wide range of desktop deployments where security and manageability are key.” What does this mean to the typical user?
This version focuses on the desktop, containing applications that are used on the desktop. Red Hat Desktop includes a mail client program, similar to MS Outlook, called Evolution. Also included is the Firefox Web browser; a complete office suite, OpenOffice.org 1.1; and GAIM, which is an instant messaging client.
You can find out more about some of the applications available in Enterprise Linux later in this book.
Third-party productivity applications are also installed by default during the system installation. This is an improvement over earlier versions of Red Hat Linux. Adobe Acrobat Reader, a Macromedia Flash plug-in, RealPlayer, and Java are just a few of the applications that work in Red Hat Desktop right out of the box.
As part of the Enterprise family of programs, Red Hat Desktop shares many of the features and tools of the other Enterprise versions. A user or administrator who is familiar with one of the versions of Enterprise 4 will be able to easily use a different version. Red Hat Desktop supports a system with one CPU and up to 4GB of system RAM.

Putting Enterprise Linux to Work
Whether you’re planning to use the AS or ES server versions of Enterprise Linux or you’ll be using the WS or Desktop versions, the choices of productivity software and what you can do with them are nearly infinite. You can use Enterprise Linux to manage all your system hardware, do system administration, create networks for sharing data, browse the Internet, serve Web pages, and much more. Take a look at just some of the tasks that you can do with Enterprise Linux.

Configuring your local network
All versions of Enterprise Linux include the X Window System (find more on this in Chapter 5), based on XFree86, which provides the foundation for a graphical user interface (GUI). However, you aren’t stuck with just one GUI because Enterprise Linux supplies two well-known GUIs: KDE and GNOME.
·        KDE: The K Desktop Environment is an optional GUI that can be selected at installation time.
·        GNOME: This is the default GUI that's installed when the operating system is installed.
If you have both GUIs installed, a tool on either desktop makes switching between the desktops very easy.
You don’t have to spend additional money to buy typical productivity applications such as word processing or spreadsheet programs. All versions of Enterprise Linux ship with a complete office productivity suite, OpenOffice.org, as well as many other graphical applications that can be used for editing graphics, building Web sites, and much more.
With either desktop, you can use the included graphical-based tools to configure and maintain your systems. You can also configure the hardware in your system and add or remove devices. Additionally, you can configure printers to work with your local network.
Enterprise Linux includes support for many types of printers from different manufacturers. You can configure a printer connected directly to your system as well as many types of network-connected printers.
Enterprise Linux gives you everything you need to set up a local network so that your systems can share data with each other. For example, you can configure the AS and ES versions to provide local network services, such as the Network File System (NFS), which shares files between the servers and WS and Desktop clients. Or, you can configure the Network Information System (NIS) to give your users the ability to log in to the network and use all the network resources.
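As a sketch of what the NFS side of this looks like on the server, an /etc/exports file might contain entries like the following. (The directory names and the 192.168.1.x addresses are hypothetical placeholders; substitute your own.)

```
# /etc/exports on the NFS server
# Export /home read-write (rw) to every host on the local subnet.
/home        192.168.1.0/255.255.255.0(rw,sync)
# Export a software directory read-only (ro) to a single client.
/opt/shared  192.168.1.25(ro,sync)
```

After editing /etc/exports, you activate the exports by running exportfs -a on the server; a client then mounts the share with a command such as mount server:/home /mnt/home.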
You will also be able to share data with computers running other operating systems, such as MS Windows, Novell NetWare, or Mac OS X. Enterprise Linux gives you all the tools that you need to configure your system to communicate with these other operating systems and exchange information.

Using Enterprise Linux to maintain your system
Keeping your systems running properly and updated with the latest patches can be a daunting proposition. Don't worry, though, because Enterprise Linux gives you all the tools that you need to perform these tasks. All versions of Enterprise Linux include a subscription to the Red Hat Network as well as the up2date application, which scans your system configuration and installed packages, looking for packages that can be updated.
Tools are available in all versions that you can use to create and remove system users and groups. You use these same tools to change properties and permissions for your users and groups as well. Several applications are available for creating file archives for backing up your data. You can compress your data to maximize your storage space and speed up your backup and restore process.
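That archive-and-compress cycle can be sketched with the standard tar and gzip tools. The directory and file names here are made up for illustration:

```shell
#!/bin/sh
# Create some sample data to back up (placeholder files).
mkdir -p /tmp/backup-demo/data
echo "important notes" > /tmp/backup-demo/data/notes.txt

# Back up the directory: -c create, -z gzip-compress, -f archive file name.
tar -czf /tmp/backup-demo/data.tar.gz -C /tmp/backup-demo data

# Restore into a separate directory: -x extract.
mkdir -p /tmp/backup-demo/restore
tar -xzf /tmp/backup-demo/data.tar.gz -C /tmp/backup-demo/restore

# The restored copy matches the original.
cat /tmp/backup-demo/restore/data/notes.txt
```

The -z option compresses the archive with gzip as it is written, which saves storage space and shortens the time needed to copy the archive elsewhere.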
Installing application software in Enterprise Linux is a relatively easy process because most applications are available in the Red Hat Package Manager (RPM) format. You can use the graphical RPM tool to install your applications, or you can use the rpm command from a command prompt.
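The command-line side can be sketched as follows. The package name foo and the .rpm file names are hypothetical placeholders, and the query is guarded so the sketch also runs on a system without rpm installed:

```shell
#!/bin/sh
# Common rpm invocations (package "foo" is a hypothetical placeholder):
#   rpm -ivh foo-1.0-1.i386.rpm   # -i install, -v verbose, -h progress hashes
#   rpm -Uvh foo-1.1-1.i386.rpm   # -U upgrade (installs if not already present)
#   rpm -e foo                    # -e erase (remove) the package
if command -v rpm >/dev/null 2>&1; then
    # -q queries the installed-package database.
    rpm -q foo >/tmp/rpm-demo.out 2>&1 || \
        echo "package foo is not installed" >>/tmp/rpm-demo.out
else
    echo "rpm is not available on this system" >/tmp/rpm-demo.out
fi
cat /tmp/rpm-demo.out
```

Either way, rpm records what it installs in its package database, which is what makes later queries, upgrades, and removals possible.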

Securing your system
Anyone who uses a computer these days is well aware of the increasing problems caused by unsecured systems. Enterprise Linux includes many of the tools that you need to secure your system from malicious attacks.
You can configure a firewall on your system by making a few choices and answering a few questions from the graphical firewall tool. If you want to go into more detail with your firewall configuration, you can use the command-line firewall tool to create more complex firewall rules. You can protect your systems from internal attacks (attacks that originate inside your organization) as well as external (outside) attacks.
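At the command line, firewall rules are built with the iptables tool. The sketch below writes a minimal default-deny rule set to a file for review rather than applying it (applying rules requires root privileges). The subnet address is a placeholder, and these particular rules are an illustrative assumption, not a recommended policy:

```shell
#!/bin/sh
# Collect each rule in a review file instead of applying it; to apply
# the rules for real, run the iptables commands from the file as root.
RULES=/tmp/firewall-demo.rules
: > "$RULES"
rule() { echo "iptables $*" >> "$RULES"; }

# Drop inbound traffic by default.
rule -P INPUT DROP
# Allow replies to connections this machine initiated.
rule -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Allow inbound SSH (TCP port 22) from the local subnet only.
rule -A INPUT -p tcp -s 192.168.1.0/24 --dport 22 -j ACCEPT

cat "$RULES"
```

Writing the rules out for review first is a useful habit: a mistyped default-deny rule applied to a remote server can lock you out of your own machine.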
Applications are also available that you can use to actively detect system intrusions. You can configure how your system should respond to intrusions and what actions should be taken to ensure that your systems are not vulnerable to future attacks.

Providing Internet services
You can use Enterprise Linux to serve information across the Internet to users on different networks than your own. The ES and AS versions of Enterprise Linux include the following Internet servers:
 
·        Apache httpd Web server: The Apache Web server is the most widely used Web server today. (See Chapter 15.)
·        FTP server: The vsftpd server is an implementation of the File Transfer Protocol (FTP), used for transferring files across the Internet. (See Chapter 14.)
·        sendmail: This is the most widely used mail transport agent today.
You can remotely log in to another computer on your own network or even on the Internet. Using the telnet program, or another more secure program called ssh, makes remote logins easy. After logging in remotely, you can control the remote computer as though you were sitting in front of it.
In Enterprise Linux, all Internet servers are based on the Transmission Control Protocol/Internet Protocol (TCP/IP), which is the protocol on which the Internet is based. Any network applications that use TCP/IP are supported natively by Enterprise Linux.
As you can see from this quick examination of the features of Enterprise Linux, you can do a lot with it. In fact, anything you can do with the most widely used operating system (MS Windows), you can do as well or better with Enterprise Linux. Your systems will certainly be more secure and less vulnerable to attack if you are running Enterprise Linux. The remaining chapters of this book explain in more detail the features briefly discussed in this chapter.

Comparing Enterprise Linux and Fedora Core

In fall 2003, Red Hat announced that it would no longer sell or support its retail box version of Red Hat Linux. Version 9 would be the last of the many versions that I’ve seen over the years.
Instead of continuing this long line of versions, Red Hat announced that it would provide support to the Fedora Project for development of what Red Hat described as a place for testing cutting-edge technology. What this means is that all development efforts for all Red Hat software would go into the Fedora Project and the Fedora software, which is known as Fedora Core. New releases of Fedora Core will occur about every six months, which is far too often for production-based systems but allows for testing of features that would appear at some later date in the Enterprise versions. At the same time as the Fedora Project announcement, Red Hat placed nearly all its efforts into promoting its Enterprise Linux product and its features and benefits.
Many people were very confused by this move by Red Hat, and many users had a strong feeling that Red Hat Linux would no longer be available.
This is simply not true. What was known as Red Hat Linux is now simply called the Fedora Project.
In my opinion, except for the name change and not being able to purchase a retail box version of Fedora, nothing has really changed as far as the features and functionality of the operating system.
The major advantage of Enterprise Linux over Fedora Core is the number of support options available from Red Hat. For many years, one of the biggest reasons given by the corporate world for not using Linux has been a lack of user support. With the promotion of Enterprise Linux, Red Hat has effectively removed lack of support as a reason for a company not to consider using Linux.
Another key feature of Enterprise Linux is the extended development and release cycle for new versions. Red Hat has stated that it plans to release new versions of Enterprise Linux every 12–18 months rather than every 6 months, as had been the case with Red Hat Linux.
However, probably the most significant difference between Fedora Core and Enterprise Linux is the price. Purchasing the AS version of Enterprise Linux with the standard support option costs about $1,500, and the premium support package costs about $2,500. Fedora Core, on the other hand, is free.
What does all this mean to the users of Enterprise Linux or Fedora? Can you use Fedora Core to provide the same services and functionality as Enterprise Linux? The answer is a resounding yes. Users can do everything in Fedora that they can do with Enterprise Linux. This is good news to users of Enterprise Linux as well. Any user who is familiar with Fedora Core can easily make the move to Enterprise Linux because they are nearly identical in features and functionality.