Food Storage and Refried Beans

A solid storage plan includes food, and also anything else you need to make the stored food into something edible. The difficulty is that none of the food storage guides or recipe books I have list how much fuel to store in order to make their recipes.

To rectify this, I have begun measuring fuel consumption while preparing recipes roughly as I would in a no-gas, no-power emergency. Perhaps others will find this useful and quantify their own recipes.

The first step is to reduce total fuel use as much as possible, and the first target food is beans. Beans are a nice target because I have a lot of them—something like 15 lbs of kidney beans that are probably older than me. Furthermore, they offer excellent nutrition, and they cook for ages. The two techniques I know of to reduce fuel use are pressure cooking and pre-soaking.

The test recipe today was for refried beans. The result was more delicious than any canned refries I have ever eaten, though the texture was much lumpier. The recipe is below. I cooked on my Coleman dual-fuel stove, using white gas. Cooking is performed in two stages: the first cooks the beans under pressure, and the second cooks the onions and “fries” the beans.

Total Fuel Use: 122 g (about 6 fl oz) of white gas

Pressure Cooking the Beans: 70 g (about 3.4 fl oz) of white gas

Refrying: 52 g (about 2.5 fl oz) of white gas
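The mass-to-volume conversions above can be sanity-checked in a couple of lines. This is a sketch that assumes white gas at roughly 0.71 g/mL; the actual density varies a little by brand, which is why the figures are only approximate.

```python
# Convert a measured mass of white gas to approximate US fluid ounces.
# The 0.71 g/mL density is an assumption; real white gas varies by brand.
DENSITY_G_PER_ML = 0.71
ML_PER_FL_OZ = 29.5735

def grams_to_fl_oz(grams):
    """Return the approximate volume, in US fluid ounces, of `grams` of white gas."""
    return grams / DENSITY_G_PER_ML / ML_PER_FL_OZ

for grams in (122, 70, 52):
    print(f"{grams} g is about {grams_to_fl_oz(grams):.1f} fl oz")
```

Weighing the fuel tank before and after cooking is more repeatable than eyeballing a volume, which is why the measurements above are in grams with the fluid ounces derived.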

Pressure cooker seated on the stove in the back yard.

Pressure cooker loaded with the beans and other ingredients, prior to pressure cooking.

Pressure cooker at pressure; the heat is too high as shown by the copious steam jet.


Adapted from Vickie Smith’s recipe for Refried Beans

Step 1

1 lb dried pinto beans, soaked at least 4 hours

4 cups pork or beef broth, stock, or water

Add the beans and broth to the pressure cooker, plus enough water to cover the beans by about 2 inches. Stir to mix, lock the lid, and bring to pressure (15 psi) on high heat. Reduce heat to the lowest setting that will maintain pressure, and cook for 12 minutes (I cook for 13 at 1 mile altitude). Remove from heat and let the pressure drop naturally. Drain the beans and mash them with a potato masher until they are to your taste in lumpiness.

Step 2

1/4 cup bacon fat

2 onions, finely chopped

1 mild poblano, pasilla, or Anaheim chile, seeded and chopped

2 cloves garlic, crushed

1 1/2 teaspoon ground cumin

Heat the fat in the pressure cooker; add the onions, chile, garlic, and cumin and cook, stirring, until the vegetables are very soft. Add the mashed beans in two or three batches, stirring to mix.

Coffee: Model Fitting Goodness

You have seen the response function contours through the stationary point, and you have seen the 3D response function visualizations. You may recall that I have two competing models: one is fit to only the response surface design data (RSO), and the other is fit to all of that data plus the screening results. Is either model any good? Which one is better?

First, a quick review of the model:

q = b₀T² + b₁t² + b₂r² + b₃Tt + b₄tr + b₅Tr + b₆T + b₇t + b₈r + b₉
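The model above is linear in its coefficients, so it can be fit by ordinary least squares. Here is a minimal sketch using NumPy with made-up design points—the values of T, t, r, and q below are placeholders, not my actual coffee measurements:

```python
import numpy as np

# Hypothetical coded design points for temperature T, time t, and ratio r.
# These are random placeholders, not the real experimental data.
rng = np.random.default_rng(0)
T, t, r = rng.uniform(-1, 1, (3, 20))
q = 2.0 - 0.5 * T**2 + 0.3 * T * t + 0.8 * r + rng.normal(0, 0.1, 20)

# Design matrix matching the full quadratic model:
# q = b0*T^2 + b1*t^2 + b2*r^2 + b3*T*t + b4*t*r + b5*T*r + b6*T + b7*t + b8*r + b9
X = np.column_stack([T**2, t**2, r**2, T*t, t*r, T*r, T, t, r, np.ones_like(T)])
b, rss, rank, _ = np.linalg.lstsq(X, q, rcond=None)
print("coefficients:", np.round(b, 3))
```

With ten coefficients to estimate, the design needs at least ten distinct runs; a full response-surface design provides more, which is what makes the standard errors in the table below estimable.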

Fitting produces numbers for all of those coefficients: b₀, b₁, and so on. The goodness of each individual coefficient is shown in the following table; the more asterisks in the significance column, the better that coefficient is supported by the data. Notice that the “all data” model fits much worse overall, with a residual standard error of 0.749, compared to 0.358 for the RSO data only.

[Table: for each coefficient, the estimate, standard error, and significance under the All Data model and the RSO Data Only model. The cell values did not survive extraction.]

Significance codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

The two graphs below show the normal quantile plots of the residuals from each model. The all-data model looks more systematically erroneous than the RSO-data-only model. The systematic error suggests that the choice of model is not particularly good. Furthermore, it suggests that the F-statistics should be treated with caution, since F-statistics are calculated under the assumption of normally distributed, independent errors.

With due caution, then, let’s examine the F-statistics for these models. The RSO-data model produces an F-statistic of 6.4, well above the Box, Hunter, and Hunter criterion that the F-statistic exceed 4. The all-data model’s F-statistic, on the other hand, is a miserly 1.06.

                          All Data    RSO Data Only
Degrees of Freedom           9             5
Residual Standard Error      0.749         0.358
F-statistic                  1.06          6.43
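For reference, the overall F-statistic compares the fitted model against an intercept-only fit. This sketch computes it from the residual and total sums of squares, using a made-up one-variable regression rather than the actual coffee data:

```python
import numpy as np

def overall_f_statistic(X, y):
    """Overall regression F-statistic, F = [(TSS - RSS)/p] / [RSS/(n - p - 1)],
    for a design matrix X whose last column is the intercept (all ones)."""
    n, k = X.shape                      # k = p + 1 parameters, including intercept
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    rss = float(resid @ resid)          # residual sum of squares
    tss = float(((y - y.mean()) ** 2).sum())  # total sum of squares
    p = k - 1                           # model degrees of freedom
    return ((tss - rss) / p) / (rss / (n - k))

# Toy example: y genuinely depends on x, so F should comfortably exceed 4.
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 30)
y = 1.0 + 2.0 * x + rng.normal(0, 0.2, 30)
X = np.column_stack([x, np.ones_like(x)])
print(f"F = {overall_f_statistic(X, y):.1f}")
```

An F near 1, as in the all-data model, means the model explains little more variance per degree of freedom than the noise itself.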

In conclusion, I get a better fit—to the indicated model—using only some of the data. This is, frankly, an indictment of the model. Anyone can carefully select data to produce a good fit, and the fact that I have done so may be considered a round condemnation of my objectivity. I may reconsider the model, dropping the least important terms while adding either a three-way interaction or a two-way interaction involving squared terms. Perhaps more creative math will be required.