Broken Clamp

Sometimes things happen. Fortunately, there were no injuries. I was using this lump of crud to clamp some wood pieces while the glue cured. Or rather, I was trying to. The tension I had on the clamp was modest; I certainly didn't have to bear down on the screw. Oh well, cheap tools. This is a 4-inch C-clamp. Note that the pivot separated too.

There isn't any sign of corrosion or other pre-existing fracture. The crystalline structure in the metal is quite variable, which implies the casting temperature was poorly managed.

Coffee: Design of Experiments (Part 2: RSO)

In Review

The results of my 4-trial fractional factorial experiment (main effects) showed that extraction time is the most important variable, and that temperature and C/W ratio are less important. In the interest of completeness this experiment will not simply vary extraction time, but rather seek to efficiently find the maximum.

Lessons from the Screening Experiment

I am accustomed to making my coffee with about 22 fluid ounces of water, which makes two nice cups for my morning. During the screening I used 22 fl oz as a baseline, and increased or decreased the amount of coffee accordingly. The result was that some experiments required 48.8 g of coffee. You can’t possibly imagine how much coffee that is. It almost completely fills a coffee cup with just the beans!

In subsequent experiments I will limit the amount of coffee beans to something more modest, perhaps about 20 g, and vary the amount of water. With this more cautious approach I may live to see the final results.

The Design

NIST offered three designs for response surface objective (RSO) experiments. Specifically there are two interpretations of the Box-Wilson central composite design, each of which requires 20 total runs. An alternative design reduces the number of experiments to 15, which is the reason I chose it.

Specifically, the Box-Behnken design provides an RSO approach for 3 factors in 15 experiments. That is the same number I originally suggested through my naïve approach, but without all the baggage. Effectively, my original 5 values for each factor get reduced to three, and those are shuffled about in a way that is well-suited to estimating their effects. In my case, the specific experiments would be:

Response Surface Objective

             Actual Values               Statistics Jargon
Trial       r (g/ml)  t (sec)  T (F)      r     t     T
1            0.035       10     195      -1    -1     0
2            0.075       10     195      +1    -1     0
3            0.035      300     195      -1    +1     0
4            0.075      300     195      +1    +1     0
5            0.035       55     185      -1     0    -1
6            0.075       55     185      +1     0    -1
7            0.035       55     205      -1     0    +1
8            0.075       55     205      +1     0    +1
9            0.055       10     185       0    -1    -1
10           0.055      300     185       0    +1    -1
11           0.055       10     205       0    -1    +1
12           0.055      300     205       0    +1    +1
13, 14, 15   0.055       55     195       0     0     0
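The Box-Behnken construction behind that table is mechanical: every pair of factors is run through a full 2×2 design at its low and high levels while the remaining factor sits at its midpoint, plus a few replicated center points. A minimal sketch in Python, with the levels passed explicitly as (low, mid, high) triples since my time axis uses a log-spaced midpoint (55 sec) rather than the linear one:

```python
from itertools import combinations

def box_behnken(levels, n_center=3):
    """Box-Behnken design: for each pair of factors, a full 2x2 design
    at (low, high) with the remaining factors held at their midpoints,
    plus n_center replicates of the all-midpoint run."""
    mid = [m for lo, m, hi in levels]
    runs = []
    for i, j in combinations(range(len(levels)), 2):
        for a in (levels[i][0], levels[i][2]):
            for b in (levels[j][0], levels[j][2]):
                run = mid.copy()
                run[i], run[j] = a, b
                runs.append(tuple(run))
    runs += [tuple(mid)] * n_center  # center points, e.g. trials 13-15
    return runs

# r (g/ml), t (sec), T (F) ranges from the coffee experiments
design = box_behnken([(0.035, 0.055, 0.075), (10, 55, 300), (185, 195, 205)])
for run in design:
    print(run)
```

This reproduces the 15 runs above (in a slightly different order): 4 runs per factor pair, times 3 pairs, plus 3 center points.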

This set will take me about three weeks. With luck, by the time you read this I will have trickled out the earlier articles, and you will have results in two weeks or less. Sorry for the suspense.

Oh yes, I’m interested in other people’s Q functions too, so if you want to run my exact experiments, please post your data and I will post your results. Public or anonymous, at your discretion.

On Food Storage

I post this at the risk of seeming paranoid; I'm not. I care deeply about my family's welfare. Prudence, and my Boy Scout training, compel me to be prepared. Although I'm a food storage novice, I have asked questions not answered in books. My food storage series of posts is about spreading what I learned that I couldn't find in books. I rather like practicing preparedness: cooking with the Dutch oven in the back yard is a little like camping. And I've made some great bread.

You can’t prepare for everything; preparation costs time and money. A good candidate emergency for preparedness is one which

  • Has substantially bad consequences
  • Is likely to occur
  • Can be prepared for

The first two bullets are the “consequence” and “likelihood” attributes of risk. Some examples of emergencies that seem ripe to me include automobile breakdown, power failure during bad weather, or job loss. Others might include material loss to theft, fire, or severe weather, or illness. We trust our insurance companies to help us in many kinds of disasters. For job loss the only real insurance is money in the bank, though it is wise to retain or enhance job skills, and keep debt modest to preserve mobility.

I am trying to establish a food storage program. Food storage helps provide insurance against scarcity, sharp rises in prices, and at some level against short-term problems like inclement weather or job losses. The exact duration to plan for depends on your storage space, and fear—I’m starting with one month, and will probably increase to three months over time. More than that will be difficult to store in my present housing, and difficult to manage in any case.

The bromide in the food storage community is “store what you eat and eat what you store.” “Eat what you store” means to eat stored food while replacing it with new food, to rotate. Of course, adherence to “store what you eat” makes sense if you plan to rotate things into use. It is not without flaws, however. Foods with long shelf lives are roundly advised against by every health association on the planet. Consider:

  • Shortening, a partially hydrogenated fat, will last much longer than canola oil. Too bad it also (tends to) contain trans fats.
  • White flour will store longer than whole-wheat flour.
  • Brown rice spoils faster than white rice.

This isn’t important unless you “store what you eat and eat what you store”. Making relatively bad-for-you foods a regular part of your diet simply to support your food storage plan seems like a poor choice.

Using a 3-month storage plan avoids these problems. You can store three months of foods that aren’t terrible for you, and you can therefore eat what you store.

The Nature of an Emergency

The most benign reason to tap into the storage is short-term economic hardship, such as a lay-off or temporary unpaid leave due to, say, a sick grandmother. In this scenario the water and power are on, refrigeration and heat work, and fresh items (eggs, milk, vegetables) may still be purchased.

A harder emergency might be long inclement weather, or regular power problems. You might have power or gas to cook with, but refrigeration would be dicey. Presumably grocery stores would not be reliable in these circumstances either. Convenience fuels, like propane, kerosene, charcoal, and Coleman fuel would all be in demand.

Consider using storage in the second, more severe, scenario. No fresh milk for cereal. You can use powdered milk (but do you eat that regularly?). No eggs, or at best dried eggs (do you eat those regularly?). No lunch meat or cheese for sandwiches. You did bake bread, right?

Living off food storage in those circumstances would be an appalling amount of work: unmitigated toil to cook three full meals a day with no refrigerated leftovers and without lots of "cereal bar"-type products.

Some things would just be difficult to cook. All your casserole recipes will be useless if the oven in your range is controlled by an electric thermostat…and the power is out. Practically, you may be cooking on a camp stove in the garage, or on the patio using the barbecue. No meat though, the freezer is out too.

Those specialty foods, like powdered milk and powdered eggs are not a part of our regular diet. Getting stock rotation on those products would be challenging. My next food storage post will cover the requirements for a successful food storage plan.

Coffee: The Detestitron

Please, when you are shopping for a grinder, don’t believe anything you hear (except what I tell you…). A blade grinder is probably as good a grinder as you want to afford. Don’t presume that a burr grinder will automatically produce a more consistent grind. Take, for example, my manual burr machine. Compare the grind consistencies from my cheapo blade grinder with my burr grinder in the following picture. Neither is “consistent”, and the variation from the burr grinder goes from about 4 mm to dust. The blade grinder produces 1 mm to dust. Neither appears to produce a dominant particle.

You may want to cover your children’s eyes; even a brief look at this grinder could give them nightmares.

Detestitron is a heavily modified portable burr grinder. It was about $20, but it uses conical ceramic burrs and is quite ingeniously designed. The modifications I made to the original grinder maintain concentricity between the inner and outer cones, and facilitate holding the cursed thing to the table while turning the crank. The grind consistency, for all that, did not actually improve noticeably.

The Detestitron's grind axle is centered in ball bearings. The original shaft was extended with a small brass rod, brazed on. The bottom bearing is fixed in an aluminum bar, which is in turn screwed to the chassis. This keeps the grinder very stable, but because the center cone is not perfectly round, it is still not really centered.

The burrs are basically the same as you would see on a pepper mill, but much larger. There is a screw-like mechanism to force the beans into the narrow grinding slot.

Detestitron’s fundamental problem is that the length of the surfaces over which there is a consistent separation is very short—the coarser the grind, the shorter the overlapping length. I believe that any grinder with similar burrs would produce similar results. There are other designs, but they seem to start at about $200. Did I mention donations are welcome?

Coffee: Analysis & Results of the Main Effects Study

I have conducted the trials in the following table.

             Actual Values               Statistics Jargon
Trial       r (g/ml)  t (sec)  T (F)      r     t     T      Q
1            0.035       10     205      -1    -1    +1     2.5
2            0.075       10     185      +1    -1    -1     3
3            0.035      300     185      -1    +1    -1     4
4            0.075      300     205      +1    +1    +1     4.1

This study is a 3-factor fractional factorial design, denoted 2^(3-1). The good people who write for Wikipedia tell me that this design estimates main effects, but those may be confounded with two-factor interactions. Indeed, there is some reason for concern with the simple quality function in this situation. For example, I worry that rows 1 and 2 could easily produce similar effects: the C/W ratio is lower in 1, but the temperature is higher, so extraction should be similar to a higher ratio with lower temperature. In any case, if the result is not revealing, the experiment may be augmented with the remaining 4 cases of the full factorial design.

The main effects plot shows how each parameter influences the result, though recognize that interactions may be masking main effects. The main effects plot for this experiment is shown below. From this it would appear that extraction time is the most sensitive parameter—to me a surprising observation. I had certainly expected the coffee/water ratio (cwrat) to be the dominant variable, but here it does not appear to be so. The negative correlation with temperature is very surprising, and indeed I am skeptical. Still, the variation appears to be quite small, and is likely due to two-factor interactions.
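For a two-level design like this, each main effect is simply the average Q at the factor's high level minus the average at its low level; no plotting software is needed. A quick check in Python using the four trials from the table:

```python
# Coded levels (r, t, T) and quality Q for the four fractional-factorial trials
trials = [
    ((-1, -1, +1), 2.5),
    ((+1, -1, -1), 3.0),
    ((-1, +1, -1), 4.0),
    ((+1, +1, +1), 4.1),
]

def main_effect(factor):
    """Average Q at the +1 level minus average Q at the -1 level."""
    hi = [q for levels, q in trials if levels[factor] == +1]
    lo = [q for levels, q in trials if levels[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

for name, idx in [("r (C/W ratio)", 0), ("t (time)", 1), ("T (temp)", 2)]:
    print(name, round(main_effect(idx), 2))
```

Extraction time dominates at +1.3 quality points, the C/W ratio contributes a modest +0.3, and temperature comes out slightly negative at -0.2, consistent with the plot.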

I would be inclined to fix the temperature and the C/W ratio at their midpoints, and vary the time alone; however, I believe the response hypersurface is more complex. I shall press forward with the response surface methodology on all three variables.

In case you want to see my data.

Trial   Q     Date      Comment
1       2.5   2/16/09   Dishwatery taste and aroma. Very simple palate with almost no mouth feel. Equivalent to mediocre drip.
2       3     2/17/09   Not very aromatic and too simple. Has nice bitterness and some acidity but not well balanced between them.
3       4     2/18/09   Mildly aromatic. Good mouth feel with acid/bitter well balanced. Slight char flavor.
4       4.1   2/19/09   Too bitter. The aroma is marvelous, very rich and tantalizing. Mouth feel is creamy. If bitterness were attenuated this would be sublime.

I wasn’t going to use fractions, but I decided I needed them. Note that trial 4 was assigned a quality of 4.1, which is really a note saying that this was a little bit better than trial 3, but not much.

Coffee: Design of Experiments (Part 1: Screening)

I will invoke a variety of principles from experiment design methods. The truth is, though, that I'm self-taught in experiment design. It is altogether probable that there are better ways of conducting this experiment which my ignorance has hidden from me. I'm open to suggestions!

Quick Review

The variables in control are temperature T, extraction time t, and coffee-to-water ratio (by weight) r. Each of these will be considered over a domain of 5 values. Temperature and coffee-to-water ratio are linearly divided into 5 steps between their minimum and maximum values, which I selected based on published coffee brewing advice. Extraction time is divided logarithmically over its domain, since my intuition is that extraction is an approximately logarithmic process in time.

Linear Uncorrelated Approach

I might assume that t, T, and r are uncorrelated and independent. Then, without assuming that Q is linear in t, T, or r, I could easily conduct the following experiments (if the mathese bothers you, skip ahead to the text):

  • T = 195 F, t = 55 sec, r in (0.035, 0.045, 0.055, 0.065, 0.075) g/ml
  • T = 195 F, t in (10, 23, 55, 128, 300) sec, r = r_max = argmax_r Q(r, T = 195, t = 55)
  • T in (185, 190, 195, 200, 205) F, t = t_max = argmax_t Q(r = r_max, T = 195, t), r = r_max

In English, first find the best point by finding the optimum coffee-to-water ratio for a fixed extraction period and temperature. My experience suggests that the ratio is likely to be one of the more sensitive parameters, so sweeping this first should provide good insight. Then, using that optimum C/W ratio, sweep the extraction time to find the best duration. Finally, sweep the temperature.
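As procedure, this is a one-factor-at-a-time sweep. A sketch in Python, where rate_cup is a hypothetical stand-in for actually brewing and tasting a pot at the given settings:

```python
def ofat_search(rate_cup):
    """One-factor-at-a-time sweep: optimize r, then t, then T,
    holding the other factors at their baseline or best-so-far values.
    rate_cup(r, t, T) is assumed to return a quality score."""
    ratios = [0.035, 0.045, 0.055, 0.065, 0.075]   # g/ml
    times  = [10, 23, 55, 128, 300]                # seconds, log-spaced
    temps  = [185, 190, 195, 200, 205]             # Fahrenheit
    r_best = max(ratios, key=lambda r: rate_cup(r, 55, 195))
    t_best = max(times,  key=lambda t: rate_cup(r_best, t, 195))
    T_best = max(temps,  key=lambda T: rate_cup(r_best, t_best, T))
    return r_best, t_best, T_best
```

Fifteen brews total, but note how the sweep of t never revisits r: any interaction between the two is invisible to this search, which is exactly the weakness described below.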

Really, this is a sort of bastardized gradient search. It doesn’t account for correlation among parameters. For example, you might imagine that a very high C/W ratio combined with a short extraction time would produce a very different flavor than the same high C/W ratio and a long extraction time. In fact, one might be delicious and the other dreadful. The proposed methodology, however, would not reveal that condition.

Another problem is that it really isn't safe to assume only 15 trials would be required. The quality function (my enjoyment) is likely to be quite subjective and quite variable; I'll assume for argument that it is also Gaussian. Q can take integer values from 1 to 5, and it is quite reasonable to assume my variability has a standard deviation of 1.5 or so. To know the actual quality to within a single value of Q with 95% certainty requires the standard deviation of the estimate to be about 0.25. We can reduce the standard deviation of the estimated Q by averaging multiple experiments. How many? (1.5/0.25)^2 = 36. Now the original 15 trials have turned into 15 × 36 = 540, and at 1 experiment per day that pushes the answer about a year and a half into the future.
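That replicate arithmetic is just the standard error of the mean worked backwards: averaging n tastings shrinks the standard deviation by a factor of sqrt(n), so hitting a target standard error s from a per-cup sigma requires n = (sigma/s)^2. A one-liner makes the trade-off easy to explore:

```python
import math

def replicates_needed(sigma, target_se):
    """Averaging n samples reduces the std. dev. of the mean to
    sigma / sqrt(n); solve sigma / sqrt(n) <= target_se for n."""
    return math.ceil((sigma / target_se) ** 2)

n = replicates_needed(1.5, 0.25)
print(n, "replicates per setting,", 15 * n, "total brews")
```

Relaxing the target standard error even a little pays off quadratically; accepting 0.5 instead of 0.25 cuts the replicate count by a factor of four.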

A savvy person might ask if there is a way to find an equally good answer with fewer experiments or if there is a better way to arrange those 375 experiments to get more broadly useful answers.

NIST provides a sublime search tree for selecting an experiment design, and it is clear that what I’ve outlined is best served by a response surface objective (RSO) on 3 factors, or possibly a main effects design first, followed by a response surface objective on 2 or 3 factors, or even a simple 1-factor search.

I expect to evaluate the RSO approach in terms of the number of experiments, and then examine a screen+RSO approach to see whether it offers a reduced number of experiments. The Box-Behnken design for RSO in 3 factors requires a paltry 15 experiments. Still, that is three weeks away, and I would prefer to have some data to work with sooner.

The screening test is much smaller, and perhaps worthwhile in that the data can be included in the RSO experiment, even though not strictly required. For my 3-factor system the resolution III screening design requires a modest 4 runs. That is small enough that I could even do two replicates of each, which would be of great help in reducing my variability. The experiments are listed in the following table.

Screening Design

             Actual Values               Statistics Jargon
Trial       r (g/ml)  t (sec)  T (F)      r     t     T
1            0.035       10     205      -1    -1    +1
2            0.075       10     185      +1    -1    -1
3            0.035      300     185      -1    +1    -1
4            0.075      300     205      +1    +1    +1
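The resolution III construction is worth seeing explicitly: write down the full 2×2 factorial in r and t, then alias the third factor to their product, T = r·t, which appears to be exactly how the signs in the table above fall out. A sketch assuming that defining relation:

```python
from itertools import product

def fractional_factorial_2_3_1():
    """2^(3-1) design: full 2^2 factorial in (r, t), with the third
    factor aliased to the interaction, T = r * t."""
    return [(r, t, r * t) for r, t in product((-1, +1), repeat=2)]

for run in fractional_factorial_2_3_1():
    print(run)
```

The four runs match the table (in a different order). The price of the aliasing is the confounding mentioned later: each main effect is indistinguishable from the two-factor interaction of the other pair.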

If I Had a Grinder

The other major effect that I would like to measure is grind size, predicated on substantially uniform product. My grinder doesn’t do it. Consider, though, the 4-factor version of this same exercise. The four-factor RSO requires 33, 46, or 52 experiments, depending on the method. The four-factor screen requires 8 experiments. Even if the screening doesn’t reduce the number of factors substantially, the experiment is tractable at 41 trials. Of course all this assumes Q has tolerable variance.

If you donate the grinder, I’ll do the experiments…

Navigating the Seas of Coffee

I love coffee. To approximate Mark Twain, quitting is easy, I’ve done it hundreds of times. I feel better, overall, when I don’t drink coffee regularly. But I love coffee. There is so much variation—the most consistent brew is still different from drink to drink. Coffee is marvelously complex; it tastes different at different temperatures, or at different times of day.

With all this variety, you might expect that some cups are better than others. To an engineer (ahem) this implies that there is an optimum. I’m fairly sure the best cup of coffee is stored next to the Golden Fleece. But maybe, just maybe, there is a very good cup of coffee I can make. In an upcoming series of postings I will write about my ongoing effort to brew, and define, the perfect cup. The search continues.

Brewing Methods

There are three classes of coffee brewing methodology:

  1. Hot diffusion extraction, including drip, French press, percolator, and vacuum
  2. Cold diffusion extraction, also called simply cold extraction
  3. Forced extraction, in particular espresso

I love espresso, but I’m not interested in spending the money required to brew my own. The word expresso is a linguistic abomination, as loathsome and vile as “12 items or less”.

Hot and cold extractions are both accessible to me, and each offers a wealth of experimental opportunity. I’ll begin my discussion with hot extraction.

Hot Diffusion Extraction

A good cup of drip coffee is hard to beat. I love Starbucks' drip because it is very rich, very aromatic, and very complex. I know that many loathe Starbucks, and in the name of political correctness, I'll be the first to state that these horrible people are wrong. Terribly wrong. Even so, it is safest to disagree from a distance, rather than risk offending a disenchanted ex-Starbucks drinker and his certainty that "multinational corporations" are secretly making his life suck while he types on his pretty little Mac.

I assert that there are five principal variables in the diffusion extraction process:

  • Coffee-to-water ratio (r)
  • The source and roast of the beans (b)
  • Distribution of coffee bean particle sizes (F(d)). Fineness at first glance, grind uniformity upon consideration.
  • Period of time in which the coffee beans are in contact with the water (t)
  • Temperature of the water (T)

One might suggest that some specific set of these variables would produce a truly excellent, repeatable, cup of coffee. If Q is the quality function, then we seek the r, b, F(d), t, and T such that we (nearly) maximize Q. So, then, this is science. In practice there are other variables, not least of which are the temperature at the time of drinking, the source of the water, and the vessel from which the coffee is consumed. I’m ignoring those. Let us explore the support of each of the chosen parameters.

Coffee-to-Water Ratio r

I have never been able to measure coffee repeatably using the oft-advised "tablespoons". I find the grain size of whole beans to be too close to the size of the spoon; leveling is impossible. Pictured (below left) is a level tablespoon, and (below right) is a very different, but also reasonable, interpretation of a tablespoon.

Starbucks provides an interpretation of a tablespoon—namely 5 g. For reference, they are recommending a coffee/water (C/W) ratio of 0.055 g/ml.

I have resorted to weighing for the beans, to provide consistency. Volume is acceptable for water, since that is not subject to nearly the same variability.

A reasonable starting range for coffee-to-water ratio might be in the neighborhood of 0.047 g/ml to 0.063 g/ml for drip (according to Black Bear). Allowing some margin for exploration, I assert that we should explore coffee to water ratios between 0.035 and 0.075 g/ml—which puts Starbucks 0.055 g/ml right in the middle.

r in (0.035 g/ml, 0.075 g/ml)

Source and Roast of Beans b

The choice of beans is important, I think. Perhaps the roast is even more important. Starbucks appears to prefer to burn their beans, and in fact a cursory glance at Trader Joe's coffee shelf suggests that most beans sold are darkly roasted. Watch the shelves: you probably won't see more than one choice that is "light roast", and maybe only one "medium roast"; the rest will be French roast, city roast, or dark roast. As far as I can tell those three are all equivalent.

Never mind; I will eventually test several varieties. Because I don't drink that much coffee, the bean for early experiments must be consistently available. Because I'm cheap, it must also be affordable. Not all the way to Folgers, but there is no chance I'll regularly buy Starbucks beans. My main choice, for these experiments, is Costco's house brand.

Particle Size Distribution F(d)

Coffee grind must be one of the most ill-described nonsensical terms in the wide field of cooking. Particle size ranges from coarse grind (perc) to a nearly molecular dust (Turkish). And “drip” means something, but nobody quantifies it. In my estimation, about half the grinder reviews I can find on the net base their rating on the noise, appearance, or the speed with which the shippers delivered their grinder(!?). I have yet to find a single review by any organization or individual which examines the grind in any repeatable way. I’d do it myself, but I lack two things: a graduated sieve set and a plausibly decent grinder. Donations welcome.

This is, by far, the weakest controlled variable in my process. I can only hope that the alarmingly wide range of particle sizes I get makes the rest of the process insensitive to the grind. Note that I have two grinders: a whirly-blade grinder and a heavily modified lump-o'-junk burr grinder based on a portable model I reviewed on Amazon, though the product appears not to be sold anymore.

The whirly blade grinder produces grind distribution something like the following.

Extraction Period t

This is very easy to control…sort of. In a typical drip system the extraction period is equal to the time it takes to run the water through. This means that the more coffee you are brewing, the longer the extraction time. I didn’t want that to be a limit, so for the purposes of experiment I use a French press. With a French press the extraction time is easily controlled with a countdown timer—depress the plunger when the timer dings. I presume my cholesterol is suffering accordingly.

The admissible range of times could reasonably go from zero to approximately 5 minutes. By zero, I mean about five seconds; pour the water in the press pot, then depress the plunger.

t in (0, 5 minutes)

Temperature T

I have read recommendations for temperature ranging from about 195 F to 205 F. I live at approximately 1 mile altitude, and therefore 205 F is the hottest I can get outside the pressure cooker. Since I’ve never seen a justification for the indicated temperatures, I provide a little margin.

T in (185 F, ~205 F)

The Quality Function

My objective is to optimize the quality, or enjoyment, of the cup of coffee. This is a highly subjective measure, and proves difficult even in that context. My tasting methodology is to spend a while smelling the hot (but not undrinkably hot) cup of coffee. I try to smell different elements. Then I take a large sip, and swallow it. I know professional tasters spit, but I’m interested only in optimizations that reflect the way I drink coffee. Practically, I find that swallowing allows me to better characterize the bitterness and the mouth feel of the beverage.

In addition to being subjective, a taster is variable. I could, in theory, drink the same cup of coffee on three different days and get three different answers. This will prove to be the source of the greatest uncertainty in my estimation problem.

I briefly toyed with the idea of expressing a vector quality function, comprised of elements like “mouth feel”, “aroma”, “flavor”, and “strength”. However, I decided that using those attributes would bias my assessment, since I might be carrying an unconscious bias that weak coffee is bad coffee. My rating scale is 1 to 5, where 1 is so bad I might hesitate before having another cup (I’d feel gouged at 30 cents), 3 is a passable daily cup, but I’d feel gouged at more than one dollar, and 5 is very good, where I’d feel I got a deal at $2.50.

Experimental Space

Suppose all our parameters vary over their ranges fairly coarsely:

  • Ratio r in (0.035, 0.045, 0.055, 0.065, 0.075) g/ml
  • Beans b in (Costco)
  • Particle size distribution F in whatever my lousy equipment provides
  • Extraction time t in (10, 23, 55, 128, 300) seconds—log spaced
  • Temperature T in (185, 190, 195, 200, 205) Fahrenheit

To do even a single experiment for each combination would require 5 × 5 × 5 = 125 experiments. Assuming a pot a day, this is almost half a year from a useful result. This radical shortcoming leads to the next topic: Design of Experiments.

Canon MX850 with Canon Paper

I revisited printing with the Canon MX850 this weekend. I printed a 4-image collage on a sheet of 8.5×11 inch Canon Glossy Photo paper. I had the printer settings for optimization and enhancement disabled, since this gave the most accurate printing in previous trials. The results are shown below. Note that my scanner pooped out about 1/3 of the way through the picture—I presume not permanently—so the images aren’t exactly the same scale and crop.

There is, maybe, a slight reddish shift in the left frame, but it is quite good overall. This print will go on display at the office. For album use, the resolution of the professional print services is better, but for most uses the printer is quite good. For anything hanging on a wall, this is great.

Vibrance

Photoshop’s Camera Raw tool, for importing RAW camera images, provides a really delightful little slider called “vibrance”. Paint Shop Pro does not. I read one post suggesting that vibrance was effectively increasing saturation on low-saturated areas. I decided this would be worth trying to emulate in Paint Shop Pro with some layer magic. The results are shown in the comparison image below. My previous posts show how lifeless this image is right out of the camera. In the comparison, the only difference between before and after is the application of pseudo-vibrance. Skin tones are pretty good, given how much increase in saturation is visible in the toy.

To do this in PSP, open your RAW (or other file). Then split channels to hue, saturation, and lightness, or HSL. Note that PSP can only do an HSL split on images with 8 bits/channel, so it will prompt you to reduce the color depth. Let it, but as soon as it has generated the H, S, L images go back to your original and “undo”—this will restore the original color depth.

After splitting to channels your screen should look a little like the following. The original is in the subwindow DSC_0006 and the channels are in Hue3, Lightness3, and Saturation3. You can close the Hue and Lightness window—we will only be using the Saturation window.

Return focus to the original image. Add a new adjustment layer, with type Hue/Saturation/Lightness. Crank up the saturation to 50 or so—well past the crazy level that you would never, ever, use in a real photo. The effect will be attenuated by adding a mask.

The layers palette will now show the background, and on top of that a Hue/Saturation/Lightness layer.

Change window back to the Saturation subwindow, select all, and copy. Jump back to the original window. Select the adjustment layer by clicking on it in the layers palette. Then “Paste Into Selection”, which will embed a mask into the adjustment layer.

You can now see a small black and white image of the saturation channel in the layer thumbnail.

That’s it. Adjust the saturation until it looks good.
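The same trick is easy to express outside any particular editor. A minimal numpy sketch of the vibrance idea, boosting saturation in proportion to how unsaturated a pixel already is (the amount knob is my own invention, not a PSP or Photoshop setting):

```python
import numpy as np

def pseudo_vibrance(saturation, amount=0.5):
    """Saturation channel scaled to [0, 1]. Low-saturation pixels get
    the largest relative boost; already-saturated pixels are barely
    touched, so skin tones survive while dull areas gain color."""
    boosted = saturation + amount * saturation * (1.0 - saturation)
    return np.clip(boosted, 0.0, 1.0)

s = np.array([0.1, 0.5, 0.9])
print(pseudo_vibrance(s))
```

A pixel at saturation 0.1 gains 45% relative to its starting value, while one at 0.9 gains only 5%, which is the selective behavior the adjustment-layer mask is emulating.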

Benefits and Drawbacks

Compared to Photoshop, this process is definitely a hassle. I haven’t determined if it can be scripted, but I’m certain there are keyboard shortcuts to speed the process along. There are benefits, though. In Photoshop Elements, once you set the vibrance in the import utility you’re done. If you decide later that you want to tweak the vibrance you have to go back to square one. In PSP you retain the adjustability, at the full 16 bits/channel, and you can layer in other adjustments, like levels. Furthermore, you can change how the effect is applied by adjusting the mask. For example, you can apply curves to the mask layer to change which portions of the picture are affected. I think that kind of adjustment would be fiddly, though.

Final note: I may actually be turning up the saturation on more saturated sections, rather than on less. The effect seems to work, however. I did try using the inverse mask, and that may result in pleasing images, though in this case the image looked flatter, not more colorful.

Subscribing to this blog

I use RSS to subscribe to my friends’ blogs, and to feeds from certain authors I enjoy. Subscribe, in this case, means that I go one place to see if anything new has appeared on myriad sources. The software or service that assembles all your subscriptions in one place is called an aggregator. I use iGoogle, both as my home page and as my aggregator. The following screen capture shows part of my home page. There are six widgets—each has its own blue title bar. Weather, Google Calendar, and Google Docs are all non-RSS widgets. The other three include two from Ambrose Evans-Pritchard at the Telegraph, and one which is the feed from this blog. I have concealed my friends’ blogs in the interest of their privacy.

To subscribe to this blog you may be able to simply click the little RSS icon in your browser's address bar, shown in the next graphic. You are using Firefox, right?

This worked for one friend of mine, but did not work for me. It was, however, easy to do through Google. I clicked “Add Stuff” on my iGoogle page, and then clicked “Add feed or gadget”, and in the box I entered the feed URL for this page

http://inkofpark.wordpress.com/feed/

Visit the link above and you’ll be greeted by something like the following graphic. Perhaps that will help.

You might find the following button works for Google too

Add to Google