
Talk:Kilobit


Discussion about centralization took place at Talk:Binary prefix.

1000 or 1024

How can a kilobit be 1000 _OR_ 1024 bits?

See binary prefix - Omegatron 00:45, Jun 4, 2005 (UTC)

I just added a more meaningful byte value conversion, as that's something people would likely be looking for....
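
To make the two readings concrete, here is a minimal arithmetic sketch (Python; the 56 kbit figure is only an illustrative assumption, not something taken from the article):

# Decimal (SI) vs. binary readings of "kilobit"; the example value is assumed for illustration.
KILOBIT_DECIMAL = 1000  # 1 kbit = 10**3 bits under the SI prefix
KILOBIT_BINARY = 1024   # 1 Kbit = 2**10 bits under the traditional binary usage

bits = 56 * KILOBIT_DECIMAL   # "56 kbit" read with the SI prefix
print(bits)                   # 56000 bits
print(bits / KILOBIT_BINARY)  # 54.6875 "binary" kilobits for the same data
print(bits / 8)               # 7000.0 bytes, the byte value conversion mentioned above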

upper vs lower case

The article states that using lower-case 'b' for bits and upper-case 'B' for bytes is unchallenged. It should be noted that it can depend entirely upon context: while this separation is the most common, lower-case 'b' also often means bytes, and in older technical manuals upper-case 'B' sometimes means bits. Carewolf 09:09, 6 September 2007 (UTC)

kbit is incorrect?

"abbreviated kb, sometimes also (incorrectly) kbit" yet the unit of choice for this article is kbit ie. "kbit/s" —Preceding unsigned comment added by 203.152.115.183 (talk) 22:43, 18 September 2007 (UTC)[reply]

"bit" is the SI symbol for bit. So, it's definitely "kbit" under metric system rules. However, "kbit" is not an abbreviation, all abbreviations are forbidden in metric system - it's a symbol (or prefix + unit symbol, actually) —Preceding unsigned comment added by 124.171.97.252 (talk) 09:10, 20 November 2007 (UTC)[reply]


Yes, kb = kbit, but their usage was frowned upon as non-standard, a bit like putting "BTW" in the middle of a formal business letter: you just don't do it. "Kb" meant bits when programming in assembly or for cryptology, and knowing the difference was required.

i.e., to oversimplify for today's PCs: 8 bits make 1 Byte, therefore pressing 1 key on a keyboard produces a character that is 1 Byte in size.


The KB/Kb confusion occurred back when people made some mistakes typing computer books, and before the 1990s people often used typewriters (http://en.wikipedia.org/wiki/Typewriter). Plus there are mentally lazy people and incompetent software companies that wouldn't take the time to learn that KB = KiloBytes and Kb = Kilobits.


The difference between Binary and Microsoft/"soft" bytes (as they were once termed) was only important before the 1990s. On odd occasions KBB indicated Binary KiloBytes, but that was a programming shorthand that never became a widely adopted standard; instead "Binary KB" was considered more acceptable.

The usage of soft bytes is like asking someone the time: if their watch says 9:08 they may say nine o'clock, ten after nine, or quarter after nine.


As a result, I'd hazard a guess that the average person just sees the Kb, KB, kB, KiB discussions as meaningless drivel by PC resellers and zombie bean-counters.

After decades of KB and Kb usage, I can't say I'll bother to learn to use kB/kb/KiB/kib/Kib/kIB. —Preceding unsigned comment added by 64.231.89.232 (talk) 14:18, 13 April 2009 (UTC)

"Kb" as a possible abbreviation, vs. "kb"

Other technology definition sources include the possibility of Kb for "kilobit," whereas here there is only "kb." The kilobyte page notes both KB and kB, as does this page. However, this page does not seem to accept the possibility of "Kb." Was this intentional? Can Kb be included as an option?

One of those sources: http://www.angelfire.com/ny3/diGi8tech/KGlossary.html

kilo-

A prefix indicating 1000 in the metric system. Because computing is based on powers of 2, in this context kilo usually means 2^10, or 1024. To differentiate between these two uses, a lowercase k is used to indicate 1000 (as in kHz), and an uppercase K to indicate 1024 (as in KB).

kilobit

Abbreviated Kb or Kbit. 1024 bits (binary digits).

Robigus (talk) 19:15, 2 October 2008 (UTC)
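
For what it's worth, the lower-case-k / upper-case-K convention the quoted glossary describes can be sketched in a few lines (Python; the helper name to_bits is made up for illustration and is not a standard API):

# Sketch of the glossary's convention: lowercase k = 1000, uppercase K = 1024 (assumed helper).
PREFIX = {"k": 1000, "K": 1024}

def to_bits(value, prefix, unit="b"):
    """Convert e.g. 64 Kb or 64 kB to a number of bits ('b' = bits, 'B' = bytes)."""
    bits = value * PREFIX[prefix]
    return bits * 8 if unit == "B" else bits

print(to_bits(64, "K"))       # 65536  -> 64 Kb under the glossary's binary reading
print(to_bits(64, "k"))       # 64000  -> 64 kb under the metric reading
print(to_bits(64, "K", "B"))  # 524288 -> 64 KB expressed in bits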

Clarification

This definitely needs clarification.

For most of the first half of computer usage, a Kb was a kilobit = 1024 bits. It had nothing to do with some flowery definition of numbers, but is directly related to binary: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024.

It is simple enough: a processing bus uses a number of bits, and because it is binary the possibilities are 0 to 2^x - 1; in other words, an 8-bit value can represent 256 numbers, giving 0 to 255. It impacts directly on things such as IP addresses (0 to 255, in each of 4 octets), which I think most of the public are familiar with to some degree. The whole history of that is not written here.
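
(As a side note, a short sketch of the 0 to 2^x - 1 point for a few common widths; plain Python, nothing beyond the arithmetic above is assumed.)

# n-bit binary values: 2**n distinct values, ranging from 0 to 2**n - 1.
for n in (8, 10, 16, 32):
    print(f"{n}-bit: {2**n} values, range 0..{2**n - 1}")
# e.g. 8-bit: 256 values, range 0..255 (one IPv4 octet),
# and 10-bit: 1024 values, range 0..1023 (2**10 = 1024, the binary 'kilo').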

If a computer could use decimal, then we would be fine to suddenly change everything to 1 KB=1000. IMO it is unfortunate that the distinction is not being shown correctly, and must be made somewhere.

Indeed, the whole issue only arose when manufacturers and internet providers tried to con people into thinking they were getting a true 1 KByte when it was 1 kbit, and hard drives/disks were advertised at a larger size than the byte size. If someone buys a 1 TB hard drive, formats it, and wonders why it is smaller than it should be, this page will definitely not help them understand that it is because of this issue. Computers still count in powers of 2, and so the kilobit standard is de facto 1024. Chaosdruid (talk) 04:36, 24 July 2012 (UTC)
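
(To put a number on the formatted-drive point above: a minimal sketch in Python; the 1 TB figure is the one from the comment, the rest is plain arithmetic.)

# "1 TB" as advertised (decimal) vs. how an OS reporting in binary units shows it.
capacity_bytes = 1 * 10**12      # advertised 1 TB = 10**12 bytes
print(capacity_bytes / 2**40)    # ~0.909 TiB (binary terabytes)
print(capacity_bytes / 2**30)    # ~931.32 GiB, the familiar "missing" space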

1 kb (1 kilobit) has always been 1024 bits, ever since we first used the term from the 1950s onward. It has nothing to do with the SI system of physics. Just because some people are confused, you can't just change the meaning of a word that has been used for over half a century. It would render all computer books written up until some 10 years ago invalid. 84.26.190.3 (talk) 08:30, 9 December 2022 (UTC)
Do you have a reliable source to back up your claim that 'kilobit' was used with a binary meaning in the 1950s? Dondervogel 2 (talk) 09:34, 9 December 2022 (UTC)

Upvote change to redirect

I see that this article was changed to a redirect to byte a couple of years ago. That was a good change. I note that here since I think the same should be done for kilobyte, megabyte, and gigabyte. Stevebroshar (talk) 11:34, 25 December 2024 (UTC)