For a long stretch of the Middle Ages, Roman numerals were used exclusively. They were inconvenient for large sums, but Western Europe had no other option. Eventually, however, along came the so-called Arabic numerals, introduced by Leonardo of Pisa, better known today as Fibonacci. Fibonacci's Liber abaci ("Book of Calculation"; it wasn't about the abacus) presented Arabic numerals (which probably came originally from India) and a decimal system, with "places" for ones, tens, hundreds, and so forth. With these new numbers came something very new and strange to Europeans: what we call "zero."
Of course they did not call it "zero" when it was first introduced. The Arabic word was ṣifr (Latinized as zephirum), which, filtered through Old French as cifre, eventually became the English cipher. John Sacrobosco (c. 1195 - c. 1256; mentioned here) explained in The Craft of Numbering:
A cipher tokens nought, but he makes the figure that comes after to betoken more than he should; thus 10. Here the figure of 1 betokens 10, and if the cipher were away, ..., he should betoken only 1, for then he should stand in the first place. [paraphrased]

The concept of the zero was so mysterious, the new number system so different and difficult to master (the British Exchequer clung to Roman numerals, at least partially, until the mid-17th century), that using them seemed like a secret code. The words encipher and decipher grew from the ability to make and read this code and understand the zero.
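For modern readers, Sacrobosco is describing plain positional notation: each place is worth ten times the one to its right, and the cipher occupies a place that would otherwise vanish. Here is a minimal sketch of that idea (the function name place_value is purely illustrative, not anything from the medieval texts):

```python
# A small illustration of Sacrobosco's point: on its own the cipher (0)
# "tokens nought," but it pushes the figure beside it into a higher place.

def place_value(digits: str) -> int:
    """Evaluate a string of decimal digits as the sum of digit * 10**place."""
    total = 0
    for place, digit in enumerate(reversed(digits)):
        total += int(digit) * 10 ** place
    return total

print(place_value("1"))   # 1  -- the figure of 1 stands in the first place
print(place_value("10"))  # 10 -- the cipher makes the 1 betoken ten
```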