During Spring Break this year, members of the Family Trivium household have been working on robotics. One of us went to an introductory clinic while others of us worked on an independent project. One of the goals of the independent project was autonomous obstacle avoidance accomplished through the use of at least one sensor.

What is a Sensor?
Looking at the Merriam-Webster Dictionary, we find a sensor defined as

a device that responds to a physical stimulus (as heat, light, sound, pressure, magnetism, or a particular motion) and transmits a resulting impulse (as for measurement or operating a control)

We would argue that the sensor itself does not respond to the physical stimulus. Rather, it simply takes the physical stimulus as input before transmitting an impulse. This is the case in our robotics projects, where another component of the robot responds to the physical stimulus after the controller (the robot's brain) receives and processes the impulse provided by the sensor.

Wikipedia gives a more thorough, and practical, definition of sensor:

A sensor is a transducer whose purpose is to sense (that is, to detect) some characteristic of its environs. It detects events or changes in quantities and provides a corresponding output, generally as an electrical or optical signal; for example, a thermocouple converts temperature to an output voltage. But a mercury-in-glass thermometer is also a sensor; it converts the measured temperature into expansion and contraction of a liquid which can be read on a calibrated glass tube.

An Example
In our independent robotics project, an ultrasonic distance sensor was used to help the robot determine when it was nearing an object. This sensor continuously sent a distance measurement from the front of the robot to the controller. When the controller received a measurement under a certain threshold, it sent a command to the motors to stop, then performed some additional actions, consulting the distance sensor again to determine which direction to move next. One member of the Family Trivium household likened the distance sensor to the eyes of a robot but, upon understanding the device's input and output, revised the analogy to a bat's ears during echolocation.
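The stop-at-a-threshold logic described above can be sketched in a few lines. This is a minimal sketch only: the threshold value, command names, and simulated readings are our own hypothetical stand-ins, not the API of any particular robotics kit.

```python
# A minimal sketch of the obstacle-avoidance loop described above.
# The 20 cm threshold and the command names are hypothetical stand-ins,
# not taken from any particular robotics kit's API.

THRESHOLD_CM = 20  # stop when an object is closer than this

def avoid_obstacles(readings):
    """Turn a stream of distance readings into motor commands."""
    for distance_cm in readings:
        if distance_cm < THRESHOLD_CM:
            # Too close: stop, then turn to look for a clear direction.
            yield "stop"
            yield "turn"
        else:
            yield "forward"

# Simulated sensor readings: the robot approaches a wall, then turns away.
commands = list(avoid_obstacles([50, 30, 10, 40]))
```

In a real robot, the list of simulated readings would be replaced by a loop polling the ultrasonic sensor, and the yielded commands would drive the motors.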


Hold on! Didn't we just cover this? Well, yes and no. In our last post, we looked at the motor, particularly as it relates to robotics. Back in January, we covered motor oil, and even revisited it with an examination of synthetic motor oil in February – both are used in the engines of the vast majority of vehicles on US roads. Our recent post on computers referred to two particular designs: the Difference Engine and the Analytical Engine. The bare-bones Wikipedia entry for motor immediately provides a link to the Wikipedia entry for engine. As we did with motor, we'll first look to the Online Etymology Dictionary regarding the origins of the word engine:

c.1300, “mechanical device,” especially one used in war;  “manner of construction,” also “skill, craft, innate ability; deceitfulness, trickery,” from Old French engin “skill, wit, cleverness,” also “trick, deceit, stratagem; war machine” (12c.), from Latin ingenium “inborn qualities, talent” (see ingenious), in Late Latin “a war engine, battering ram” (Tertullian, Isidore of Seville). Sense of “device that converts energy to mechanical power” is 18c.; in 19c. especially of steam engines.

We find that the word engine comes up in the century preceding the use of the word motor. Engine seems to be more generic, simply describing a "mechanical device," while motor is used to describe a device that creates motion. Certainly, Charles Babbage's computer designs were plans for mechanical devices. According to William Harris's article on Babbage's designs, the functional examples of the Difference Engine produced in 1991 had no fewer than 8,000 moving parts and weighed in at 15 tons each. While the device was designed with the ability to calculate artillery tables, it was not meant to be used directly in war, though it might make decent cover with its impressive weight and its height and length of 7 feet and 11 feet, respectively.

Referring back to the Online Etymology Dictionary entry for the word engine, more recent usage refers more specifically to a device which converts energy into mechanical power. Most commonly today, we think about the internal combustion engines in our automobiles, which convert the energy stored in fuel into movement. As used in robots, and increasingly in consumer automobiles, the electric motor converts electrical energy into movement.

It would seem that a motor is an engine, but an engine isn’t necessarily a motor. The two are often used interchangeably when referring to internal combustion engines in automobiles.


A while back, we did a series of posts on the robot. This topic surfaces again as we approach spring break. During the week, the youngest member of the Family Trivium household is looking forward to attending an introductory robotics clinic while others of us work on a slightly more advanced robotics project. Yes, this is our idea of fun.

In one post, we mentioned that one likely component of a robot is a motor. To understand a machine, you need to understand its components. We'll start by referencing the Online Etymology Dictionary to help us understand the word:

mid-15c., “controller, prime mover,” from Latin motor, literally “mover,” agent noun from past participle stem of movere “to move” (see move (v.))…

Since we found in our previous post that one of the four major components of a robot is its moving parts, it makes perfect sense that a robot will likely contain at least one motor. The independent robotics project that we are working on during spring break will have two motors. We'll just have to wait and see how many motors are manipulated in the introductory clinic.

Check back as we expand on the concepts behind robotics, as well as machines and technology in general. Don’t worry food fans – we’ll still be eating, so we’re certain to be mixing in some discourse on bites to go along with all the discussion of bytes.


In recent posts on food, we’ve examined eats that were arguably more American than apple pie. While not as popular as ketchup and pizza, computers, according to a Census Bureau report, are in 84% of US households.

In the Family Trivium home, we have more than one, and it seems like we are still always competing to use them – to complete homework, pay bills, research upcoming purchases, and more. It seems like modern Americans do almost everything with their computers, without stopping to appreciate these devices and how they came to be.

Once again turning to the ever-present Merriam-Webster Dictionary, we find that a computer is:

an electronic machine that can store and work with large amounts of information

This is a serviceable definition to explain a modern computer, but we can gain more insight from the Online Etymology Dictionary:

1640s, “one who calculates,” agent noun from compute (v.)…

William Harris has written an article that condenses a fairly comprehensive understanding of the earliest development of what we think of today as a computer. In it, we learn that the first "computers" were in fact persons whose occupation was to perform calculations and enter the results into tables referenced by others. Harris's examples include aiming artillery shells and calculating taxes.

According to Harris, the first device resembling our modern conception of the computer was developed by British mathematician Charles Babbage. By 1832, Babbage had developed a prototype of the "Difference Engine," which was designed to make tables. Subsequently, Babbage designed the "Analytical Engine" to take data input on punched cards, perform more complex calculations, including multiplication and division, and then record the results on paper. Babbage's designs were more ambitious than his times would allow, but in 1991 his design was proved out when two functional copies of his "Difference Engine" were built to his specifications.

Stay tuned for future posts where we’ll take a look at the evolution of the mechanical device into a more modern form.


In recent posts, the beginning of grilling season has had us talking hot dogs. In our area, we've already seen upper 80s, so we are in grilling mode. We've already grilled up quite a few hot dogs on our patio, but we have one picky eater in the Family Trivium house. When it comes to grilled things, this person will pretty much only eat cheese brats, which, like hot dogs, are a type of sausage.

…but what is a sausage? For a basic, seemingly obvious definition, we turn to the Merriam-Webster Dictionary:

spicy ground meat (such as pork) that is usually stuffed into a narrow tube of skin or made into a small flat cake

So, it's just spiced ground meat, optionally stuffed in a tube? Essentially, yes. In the case of cheese brats, the meat is pork and one of the added ingredients is cheese, typically cheddar.

Who invented sausage, and where was it invented? As with many old foods, the answers to these questions are not simple.

There seems to be a consensus that the Greeks were the first to refer to sausage in literature. A form of blood sausage is referenced in the Odyssey, which, according to Wikipedia, was likely written in the 8th century BC.

According to the Online Etymology Dictionary, the word "sausage" first appears in the 15th century (CE) as "sawsyge, from Old North French saussiche" and Latin "salsus," meaning "salted." This meshes with attempts to connect sausage to the earliest humans, who butchered the animals they hunted and looked to extend the meat's "shelf life" by adding spices. It seems reasonable that salt would be one of these ingredients, as it can be found in many places on Earth and is still used in most charcuterie today.

Do you have a favorite type of sausage? Please share in the comments.


Like so many people do at the beginning of the year, we made some lifestyle changes in our family. It seems like we do this every year, but this year, the changes seem to have a much better chance of sticking. How and why is a story for another day.

One of our changes has been to cut back (though not necessarily cut out) our consumption of soft drinks. We weren't big "sodaholics" before – maybe having 2-4 per week. Now, we have 0-1 per week. Perhaps the recent news articles linking the caramel color used in soft drinks to cancer have helped encourage us to cut the habit. More likely is the fact that we now eat restaurant – or fast food – meals only once per week, rather than the previous 2-4 times per week.

Over the last weekend, during our outing, we discussed how there seem to be affiliations between different fast food chains and soft drink makers. Regardless of brand, these soft drink makers' flagship products all seem to be colas. In past posts, we've looked at root beer and ginger ale. Where does cola fit into the mix?

First, What is cola?
Cola is a caffeinated, optionally carbonated, beverage originally flavored with kola nut and coca leaves (the source of cocaine). Modern colas contain neither kola nut nor coca leaves, instead relying upon other, less controversial ingredients to supply the flavor and the caffeine. The sweeteners used today are controversial on other levels – we'll shelve that topic for now.

We find that cola was invented by pharmacist John Stith Pemberton in Atlanta in 1886. Prior to inventing the soft drink, Pemberton was known for his alcoholic coca wine, which combined wine with the extracts of kola nut and coca leaf. When Atlanta enacted prohibition in 1886, Pemberton substituted sugar syrup for the wine, and Coca-Cola was born.

While we cannot put an exact date on the creation of root beer or ginger ale, we find that both predate cola, with each thought to have first appeared in North America in the 1850s. Much like root beer and ginger ale, cola was developed from an alcoholic predecessor.

Do you drink soda? …or do you call it pop, or something else? What’s your favorite soft drink? Share in the comments.

LED Lighting

An ongoing project in the Family Trivium household has been a conversion from incandescent and fluorescent lighting to LED lighting. On this blog, we've already examined the first two and, in this post, we now turn our attention to the last.

We find that "LED" stands for "light-emitting diode." LED is a solid-state technology, in which a solid material encapsulates the components used to generate light. This means the technology is less affected by shock and vibration than its incandescent and fluorescent counterparts and should have a longer service life.

The first visible-spectrum LED was developed by Nick Holonyak, Jr. in 1962. At the time, Holonyak was consulting for GE, and each LED cost approximately $200 to produce. Additionally, early LEDs were only able to output red light, and that output was quite limited. In other words, their utility was limited to applications such as indicator lamps. Only more recently have LEDs evolved for use in area lighting.

In our house, we've reached the point where LED bulbs are in use in the areas where we spend the most time: the living room, kitchen, and bedrooms. We still have some CFLs and an incandescent or two in use in our bathrooms, garage, and basement. The LEDs that we have put into service give off a nice, natural color of light reminiscent of the old soft white 40- or 60-watt incandescent bulbs while consuming only 5-10 watts. As an added bonus, compared to fluorescent bulbs, they turn on right away, reach full brightness immediately, and contain no mercury.
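As a rough illustration of what that wattage difference means on the electric bill, here is a back-of-the-envelope calculation. The daily hours of use and the electricity price are assumed figures chosen for the example, not measurements from our house.

```python
# Rough yearly cost comparison for a single bulb, using the wattages
# mentioned above. Daily hours and price per kWh are assumptions.

HOURS_PER_DAY = 3      # assumed average daily use
PRICE_PER_KWH = 0.13   # USD; an assumed residential rate

def annual_cost(watts):
    """Yearly electricity cost (USD) for one bulb at the given wattage."""
    kwh_per_year = watts * HOURS_PER_DAY * 365 / 1000
    return kwh_per_year * PRICE_PER_KWH

# Swapping a 60 W incandescent for a 9 W LED:
savings = annual_cost(60) - annual_cost(9)  # a bit over $7 per year
```

Under these assumptions, a single swap saves a little over seven dollars a year, before even accounting for the LED's longer service life.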

Fluorescent Lamp

In an earlier post, we studied the incandescent light bulb. Today we will examine one of its relatives: the fluorescent lamp.

Fluorescent lamps are "gas-discharge" lamps that "use electricity emitted from cathodes to excite mercury vapor contained within the glass envelope, using a process known as inelastic scattering." When excited, the mercury vapor emits ultraviolet light, which causes phosphors, also contained in the lamp, to glow, or produce visible light.

While Heinrich Geissler is credited with creating light-emitting tubes in the 1850s, it was Daniel McFarlan Moore who developed the technology into something commercially viable, according to his bio at the Smithsonian Institution. Moore credited improved techniques for sealing the glass tube, compared to those of Geissler's time, with allowing this to happen.

The Smithsonian's article on Moore reports that he spent some time working for Thomas Edison prior to developing his "Moore Lamp" in 1898. Perhaps this is where he learned about the contemporary glass-sealing techniques that would allow his product to be a success. When Edison inquired, "What's wrong with my lamp?" Moore is quoted as replying, "It's too small, too hot, and too red."

Since our original post on the incandescent bulb, more (not Moore) of the CFLs that we have been using in our home have come to the end of their lives. Encouraged by dropping prices, good sales, and even some manufacturers' coupons, we have continued our transition to LED lighting, to which we will next turn our attention.

Pizza History

In our last post, we learned that this convenience food was first served up in 18th-century Naples. We found that the modern interpretation is similar to the original, but how did pizza gain popularity and become such an American staple?

A Royal Endorsement
In 1889, King Umberto I and Queen Margherita of the newly unified Italy visited Naples and ordered a variety of pizzas, according to the article A Slice of History. The Queen's favorite was the "pizza mozzarella," with toppings of mozzarella cheese, tomatoes, and basil. It's unknown whether this pizza's mimicry of the Italian flag influenced the Queen's fondness, but, at that time, the combination of toppings was named "pizza Margherita."

Prior to the Neapolitan pizza, Egyptians, Greeks, and Romans had also consumed flatbreads with toppings, but none of these managed to become as popular. Perhaps it was a lack of royal endorsement, or, more likely, the time just wasn't right.

American Expansion
In Ed Levine's article A Slice of Heaven, we learn that by the dawn of the 20th century, economic conditions would send millions of Italian workers to America in search of factory jobs and other opportunities. Along with these workers came the recipe for homemade pizza.

By 1905, Lombardi’s Pizza had opened in New York City. This first American pizzeria is still in operation today. By 1943, pizzerias had opened in other large U.S. cities, but the dish was still mostly thought of as a “poor person’s food eaten by Italians in the urban enclaves in which they had settled” as stated by Levine.

Levine reports that pizza, like many things, spread across America after World War II. Many GIs had been stationed in Italy and exposed to pizza. They returned to the States with a craving for this, at the time, exotic dish.

According to Levine, one war veteran named Ira Nevin used his oven-repair experience to develop the Baker's Pride gas-fired pizza oven. He marketed this device, which worked in concert with a Hobart mixer, to would-be entrepreneurs, facilitating the spread of pizza outside the Italian community in America.

By 1960, three major chains had been founded in the U.S. Pizza Hut started in 1958, when two brothers borrowed $600 from their mother to start their own pizzeria. Little Caesar's was started by Mike and Marian Ilitch in 1959, when an injury ended Mike's baseball career. Domino's Pizza opened its doors in 1960 in Ypsilanti, Michigan.

These, and other, budget-oriented pizza chains seem to have given independent pizzerias quite a run for their money over the last half-century. In our area, we’ve noticed a resurgence of local pizzerias and, more recently, higher-end chains.

What’s your favorite pizzeria, and which of their pies is the best? Share in the comments.


In our last post, we identified ketchup as "ubiquitously American," despite its Asian origin. Today we turn our attention to another food item that dominates American diets: pizza.

Reportedly, 94 percent of Americans eat pizza on a regular basis, with 93 percent having done so in the last month. In one day, Americans will eat 100 acres of pizza, which works out to a rate of about 350 slices per second. Thanks in part to pizza, the most popular category of ethnic food in America is Italian.
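Out of curiosity, we can check whether those two statistics describe the same rate. The slice size below is our own assumption (one eighth of a 14-inch round pie), not a figure from the source.

```python
import math

# Do "100 acres of pizza per day" and "about 350 slices per second"
# agree? The slice size is an assumption: 1/8 of a 14-inch round pizza.

ACRE_SQ_IN = 43_560 * 144        # square inches in one acre
SECONDS_PER_DAY = 24 * 60 * 60

slice_sq_in = math.pi * (14 / 2) ** 2 / 8   # ~19.2 sq in per slice

slices_per_day = 100 * ACRE_SQ_IN / slice_sq_in
slices_per_second = slices_per_day / SECONDS_PER_DAY
```

Under this assumed slice size, the result comes out in the high 300s of slices per second, which is in the same ballpark as the quoted 350, so the two figures are roughly consistent.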

So, what exactly is a pizza?

Referring to the Merriam-Webster Dictionary, we find that pizza is “a food made from flat, usually round bread that is topped with usually tomato sauce and cheese and often with meat or vegetables.”

From our look at other topics, we know that modern foods tend to be an evolution of ancient versions. Was pizza always tomato sauce and cheese on top of bread?

According to an article on the history of pizza, the answer is likely yes. Pizza was most likely created in Naples in the 18th century as an inexpensive, quick-to-eat food for the working poor. It is described as "flatbreads with various toppings, eaten for any meal and sold by street vendors or informal restaurants." Common ingredients were "tomatoes, cheese, oil, anchovies and garlic." So truly, not much has changed.

As with the hot dog sold on a bun from street carts in New York City, we see a pattern of protein, fat, and sauces isolated from the consumer's hand by a piece of bread. This has the effect of eliminating the need for eating utensils, while also keeping hands clean, relatively speaking. With the dish being immediately edible, this amounts to what we think of today as a "convenience food."

Check back for our next article where we’ll try to figure out how this Neapolitan commoner’s food became such an American staple.