- Number of slides: 61
• Multi Media Materials
What’s Multimedia? • This is an introduction to multimedia. Multimedia is hard to define precisely, but it does involve a combination of existing technologies. For this reason we will first look at the components that make up a multimedia application (text, graphics etc.), what formats are used and what you should consider when buying hardware, and then at some multimedia applications. This is version 1 of this document and is lacking in several areas, particularly Unix platforms; this will be remedied in future versions.
Text • The layout of your text will depend very much on the application. A largely text-based application may have a screen full of text to read, whereas more graphical applications may try to keep text restricted to small boxes of around fifty words or less. Text is much harder to read from a computer monitor than from a book, and a screen full of small text will be very off-putting. If your application determines the fonts, choose them carefully. Some fonts are more legible than others, particularly at small sizes. On screen, unlike in printed documents, sans serif fonts such as Helvetica are generally more legible than serif fonts such as Times. If your application is cross-platform you should remember that font sizes may change between platforms; for example, Microsoft Windows fonts are bigger than the equivalent font on a Mac.
In general
• Make sure the text is legible at the size it will be viewed
• Don't use a lot of different font typefaces
• Adjust spacing if necessary (line spacing, kerning)
• Try different colours to make text more legible or stand out
• For headings try other effects, such as drop shadows or making the text curved
• Have plenty of white space round headlines
• Sentences in mixed case are easier to read than those in just capitals
ASCII • ASCII stands for the American Standard Code for Information Interchange. It assigns a number to 128 different characters. These include upper- and lowercase letters, punctuation, numbers and 32 control characters such as line feed and form feed. Because ASCII code numbers always represent the same characters, apart from the 32 control characters, an ASCII document can be read by any computer system. This makes it ideal as a format for transferring text between systems, though no formatting information is stored with it. An extended character set (128-255) is available on most systems, but these extensions differ from system to system.
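The split between control and printable codes described above can be sketched in a few lines (the classifier function is our own illustration, not part of any standard API):

```python
# Classify a 7-bit ASCII code number: codes below 32 (and 127, delete)
# are control characters; the rest are printable.
def describe_ascii(code):
    if not 0 <= code <= 127:
        raise ValueError("not 7-bit ASCII")
    if code < 32 or code == 127:
        return "control"        # e.g. 10 = line feed, 12 = form feed
    return "printable"

# The same number always maps to the same character on any system:
assert ord("A") == 65 and chr(65) == "A"
assert describe_ascii(10) == "control"      # line feed
assert describe_ascii(65) == "printable"    # 'A'
```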
OCR • OCR (Optical Character Recognition) is used to convert text in a bitmap to an ASCII file or word processor file. The printed form is scanned and the resulting bitmapped characters are then converted into ASCII text. The accuracy of OCR applications varies, and depends a great deal on the type of text being scanned. Clear, typed or printed documents with mono-spaced fonts produce the best accuracy. Many packages claim about 99% accuracy, but even this can mean a great deal of editing to find the mistakes in a large document.
Vector Graphics • Vector graphics are built up from primitives: basic drawing instructions such as line, rectangle and ellipse. These primitives may be grouped together to form objects. All vector graphics are computer generated. They can be produced by many packages, including Computer-Aided Design (CAD) packages, to generate architects' drawings for example. They can also be produced by many drawing packages such as CorelDraw! and are good for storing diagrams. In a vector format each primitive is described in turn: a vector file consists of a series of commands such as rectangle x1, y1, x2, y2, where x1, y1 etc. are the parameters which determine where and how big the rectangle will be. There may also be colours associated with the command. This allows complex drawings to be stored as very compact files. These drawings can usually be resized easily without losing any information, since it is just a matter of scaling the parameters. The size of a vector graphics file is directly related to the number of objects it contains, and a file with many objects will not only be large, but will take much longer to reconstruct. Vector graphics can be used to represent 'real world' images, but this requires a great deal of processing, as it is difficult to break such images down into simple shapes. These images are usually stored as bitmaps or pixmaps.
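The idea of a vector file as a list of commands whose parameters can simply be scaled can be sketched as follows (the in-memory representation and function name are illustrative only):

```python
# A sketch of a vector format: each primitive is a command plus parameters.
# Scaling the drawing only scales the parameters; no detail is lost.
drawing = [
    ("rectangle", [10, 10, 50, 30]),   # x1, y1, x2, y2
    ("ellipse",   [30, 20, 15, 10]),   # cx, cy, rx, ry
]

def scale(drawing, factor):
    return [(cmd, [p * factor for p in params]) for cmd, params in drawing]

doubled = scale(drawing, 2)
assert doubled[0] == ("rectangle", [20, 20, 100, 60])
```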
Bitmaps and Pixmaps • Bitmaps, which are also known as raster graphics, are composed of a matrix of dots called pixels. For a monochrome image each pixel is either black or white, but for colour images each pixel can be any colour from a given range. The colour depth of the image depends on the amount of memory used to store each pixel. For example, in an image with an 8-bit colour depth each pixel can be one of 256 different colours (or shades of grey). The most common colour depths are: 4 bit = 16 colours, 8 bit = 256 colours, 16 bit = 32,768 colours, 24 bit = 16.7 million colours. Colour look-up tables (CLUTs) or palettes are often stored with pixmaps. These are arrays of colours described as accurately as possible and referred to by their position in the array. The example below shows a 256-colour look-up table where the colours are stored as 24 bit. This means that an image using this look-up table could consist of up to 256 colours chosen from 16.7 million. For example, where 43 is stored in the file, the colour red = 181, green = 113, blue = 86 is displayed.
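The look-up described above can be sketched in a few lines (the table contents other than entry 43, which is the example from the text, are arbitrary filler):

```python
# A 256-entry colour look-up table: each entry is a full 24-bit colour,
# and the image itself stores only an 8-bit index per pixel.
clut = [(0, 0, 0)] * 256
clut[43] = (181, 113, 86)          # red, green, blue

image = [43, 43, 0, 43]            # four pixels, one byte each

displayed = [clut[index] for index in image]
assert displayed[0] == (181, 113, 86)
```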
• The main advantage of bitmaps is that they can store very large amounts of information; a bitmap of sufficient resolution can store every detail of a scanned photograph. Their main disadvantage is the large amount of space it takes to do this. The amount of space needed to store a bitmap depends on its resolution, i.e. the number of horizontal and vertical pixels, and its colour depth. A 640 x 480 bitmap with 256 colours will need 8 x 640 x 480 = 2,457,600 bits, or 300 K of storage space. Although 640 x 480 was until recently quite a common resolution for computer screens, it is quite a low resolution as far as the human eye is concerned. To store a truly realistic image, a much higher resolution, and therefore much more storage space, is required.
• The second disadvantage of bitmaps is that they usually degrade when they are resized. If the bitmap is shrunk, some pixels have to be discarded, which means losing information. If it is expanded, pixels have to be created. This is usually done by giving each new pixel a colour based on that of its neighbours, which tends to create a blocky effect. Bitmaps can be created in a number of ways. Paint and drawing packages usually store images as bitmaps. They can be images which have been scanned in or digitised from a photograph, piece of artwork or a video, or they could be generated by capturing a screen snapshot from your computer. The figure below shows various capture options.
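The blocky effect of enlarging a bitmap can be seen in a minimal nearest-neighbour sketch (our own illustration; real packages use more sophisticated interpolation):

```python
# Nearest-neighbour enlargement of a tiny 2x2 monochrome bitmap: each new
# pixel copies its nearest original neighbour, giving the blocky effect
# described above.
def enlarge(bitmap, factor):
    return [
        [bitmap[y // factor][x // factor]
         for x in range(len(bitmap[0]) * factor)]
        for y in range(len(bitmap) * factor)
    ]

original = [[0, 1],
            [1, 0]]
big = enlarge(original, 2)
assert big == [[0, 0, 1, 1],
               [0, 0, 1, 1],
               [1, 1, 0, 0],
               [1, 1, 0, 0]]
```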
File Formats and Compression § As bitmaps are very large, they are often stored in compressed formats. Compression methods can be divided into two main types: lossless and lossy. Lossless techniques do not lose any information, so the image can be recreated exactly. The simplest example is run-length encoding (RLE). If an image has large areas of the same colour, then rather than storing the colour for each pixel, RLE stores the colour once along with the number of times it is repeated. There are a number of other forms of lossless compression, such as Huffman and LZW.
§ GIF (Graphics Interchange Format) is a common format that uses LZW compression, but cannot strictly be considered lossless as it is restricted to 256 colours. This restriction means it is not suitable for photo-realistic images that will be displayed on machines capable of displaying more than 256 colours. Unless an image is very simple, lossless compression will not give good compression ratios, usually no more than 4:1, i.e. the compressed image is a quarter the size of the uncompressed image.
§ Recently there has been a lot of talk about GIF and the LZW (Lempel-Ziv-Welch) patent. LZW is a form of compression that lies at the heart of GIF and a number of other compression formats. LZW compression was originally patented in 1985, and the patent is now owned by Unisys. From this year all products using it will have to be licensed, including shareware packages; however, products released prior to 1995, and free products, do not have to be licensed. You do not need a licence to store or view GIF files, as the appropriate licences will have been obtained by the makers of the software. Any software which uses LZW compression is affected by this patent, including software which reads or writes some types of TIFF file.
Lossy compression • Lossy compression methods give much better compression ratios, but information is discarded in the compression, resulting in some loss of quality. JPEG (Joint Photographic Experts Group) is an ISO (International Standards Organisation) standard form of lossy compression. In JPEG compression the file is first transformed using the DCT (Discrete Cosine Transform), which is in itself a lossless step, but the resulting coefficients are then quantized. This is the lossy step, and it accounts for most of the compression. The loss of detail is most noticeable at sharp edges and straight lines, so JPEG is not really suitable for text and line drawings. However, it does offer very good compression ratios, which can be adjusted to preserve quality, and it does not have the 256-colour limit of GIF, so it is well suited to photographic images.
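The lossy quantization step can be illustrated in isolation (the coefficient values and the step size of 10 below are arbitrary, chosen only to show that information is discarded):

```python
# Quantization as in JPEG-style compression: divide each coefficient by a
# step size and round. Dequantizing cannot recover the discarded detail.
def quantize(coeffs, step):
    return [round(c / step) for c in coeffs]

def dequantize(qcoeffs, step):
    return [q * step for q in qcoeffs]

coeffs = [103, 47, -12, 4, 1]
restored = dequantize(quantize(coeffs, 10), 10)
assert restored == [100, 50, -10, 0, 0]   # close to the input, but not equal
assert restored != coeffs                 # information has been lost
```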
• Some common file formats are shown below. Some of the formats were developed for particular products but have become widely accepted elsewhere.

Bitmap formats:
BMP - Microsoft Windows bitmap
GIF - Graphics Interchange Format
PCX - PC Paintbrush
PICT - Macintosh
TGA - Targa
TIFF - Tagged Image File Format
JPG - JPEG, ISO standard compression
XBM - X Windows

Vector formats:
CGM - Computer Graphics Metafile
DXF - used by CAD packages
WMF - Windows Metafile
PostScript
HPGL
• Most packages will read lots of different formats, and there are many conversion utilities available to convert between them (see Appendix B). Which format you choose may well depend on the capabilities of the applications you are using, but where possible you should use standard, non-application-specific formats. This will make your images available to a wider range of people and also ensure you will still be able to use them if you change applications. You should also be aware of space considerations: a Windows bitmap file (BMP) is uncompressed and much larger than the same image stored as a JPEG file. However, compressed files must be decompressed to view them, and the time taken to do this may in some cases outweigh the size considerations.
Sound • MIDI (Musical Instrument Digital Interface) is a communications standard developed for electronic musical instruments and computers. In some ways it is the sound equivalent of vector graphics: it is not digitised sound, but a series of commands which a MIDI playback device interprets to reproduce the sound, for example the pressing of a piano key. Like vector graphics, MIDI files are very compact; however, the sound produced from a MIDI file depends on the playback device, and it may sound different from one machine to the next. MIDI files are only suitable for recording music; they cannot be used to store dialogue. They are also more difficult to edit and manipulate than digitised sound files, though if you have the necessary skills every detail can be manipulated.
Digital sound • Digital sound is sampled: at regular intervals a sample of the sound is taken and stored. The frequency at which the sound is sampled is called the sampling rate, and is usually in the range 11 kHz-44 kHz. The sample size is the amount of information stored at each sample point, usually 8 or 16 bit. A larger sample size means the sound will be more accurately reproduced, but the file will of course be larger. The total size of a mono raw (uncompressed) digital sound file is therefore sample_rate x sample_size x duration (s). One minute of 44 kHz, 16-bit sound will be 5.25 MB, or 10.5 MB for stereo (CD-quality sound). 22 kHz, 8-bit sound is a reasonable compromise between quality and file size for most purposes, and one minute of stereo sound will be 2.6 MB.
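The file-size formula above can be worked through directly (assuming 8 bits per byte; the exact megabyte figure depends on whether MB means 10^6 or 2^20 bytes):

```python
# Uncompressed digital sound size:
# bytes = sample_rate * (sample_size / 8) * duration * channels
def sound_size_bytes(sample_rate, bits, seconds, channels=1):
    return sample_rate * (bits // 8) * seconds * channels

mono_minute = sound_size_bytes(44_100, 16, 60)
stereo_minute = sound_size_bytes(44_100, 16, 60, channels=2)
assert mono_minute == 5_292_000           # roughly the 5.25 MB quoted above
assert stereo_minute == 2 * mono_minute   # CD-quality stereo, about 10.5 MB
```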
• There is a range of digital sound formats. Raw sound files are platform independent and are therefore useful as an interchange format; however, in order to play them back correctly you must know the sampling rate used to generate them, as it is not stored in a header. Microsoft Windows uses its own format, WAV, which can store a range of sample rates up to 44 kHz, at 8- or 16-bit sample size. The VOC format is used by SoundBlaster soundcards found in many PCs, and offers good compression. Apple Macs use the AIFF (Audio Interchange File Format) format, based on the Amiga IFF format; this format is also used on SGI machines. Sun and NeXT use the au format, which supports 8-bit u-law encoding or 16-bit linear coding. There are a number of programs that will convert between these formats.
Video: Analogue Video • Analogue video can come from a videodisc player, videotape recorder or live television. There are two main standards for storing analogue video: Phase Alternating Line (PAL), used in the UK and most of Europe, and the National Television System Committee (NTSC) standard, used in North and South America and Japan. There is also a third standard, SECAM, used in France. Some video players and videodisc players can now play both PAL and NTSC format videos.
• In NTSC a single frame consists of 525 horizontal scan lines. The picture is laid down on the screen in two passes, first the odd-numbered lines, then the even, at 60 passes/s (60 Hz). Using two passes like this is known as interlacing, and helps prevent flicker. Normally a computer monitor is non-interlaced, that is, all the lines are drawn in one pass. PAL uses 625 lines with a frame rate of 25 frames/s. Like NTSC it is interlaced, at 50 Hz.
HDTV • High Definition Television (HDTV) will probably be the next standard. This provides 1200 lines, using a 16:9 aspect ratio (wide screen) rather than the 4:3 currently used. Currently there are three different HDTV standards, two of which are analogue and the third digital.
Digital Video • As with images, there are a variety of digital video formats, mostly produced by commercial companies. Since digital video files are very large, the formats all include some kind of compression, and formats such as AVI (Microsoft's Video for Windows) and Apple's QuickTime use several different kinds of compression.
• The ISO compression standard for video is called MPEG (Moving Pictures Expert Group). An interim standard, MPEG-1, was released some time ago; MPEG-2 allows compression of the soundtrack with the video. There are three types of frame in MPEG: I (intra), P (predicted) and B (bidirectional) frames. The I frames are encoded as still images, using DCT encoding. The P frames are predicted from the most recent I or P frame. The B frames are predicted from the closest two I or P frames (past and future). This helps to improve the signal-to-noise ratio (SNR), particularly at low bit rates, though it increases computational complexity and bandwidth. A typical sequence of frames might be IBBPBBPBBPBBIBB… Since in order to decode a B frame you need the I or P frame that comes after it, the frames are not sent in sequential order. MPEG-1 audio supports 2 channels and sampling rates of 32, 44.1 and 48 kHz, and the compression ratio is usually 1:6, giving 96 kbit/s. MPEG-2 is similar to MPEG-1, but has been extended to cover a wider range of applications. Whereas MPEG-1 was optimised for delivery on CD-ROM at 1.15 Mbit/s, MPEG-2 typically works at 4 Mbit/s. MPEG-2 audio will supply up to 5 full-bandwidth channels and up to 7 commentary channels, and also allows half sampling rates (e.g. 22.05 kHz). MJPEG, or Motion JPEG, simply means each frame of the video has been compressed using JPEG.
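The reordering of frames for transmission can be sketched as follows (an illustrative simplification of the idea, not a real MPEG multiplexer):

```python
# B frames depend on the I or P frame that follows them in display order,
# so that anchor frame must be transmitted before the B frames it supports.
def transmission_order(display):
    out, pending_b = [], []
    for frame in display:
        if frame == "B":
            pending_b.append(frame)
        else:                      # I or P: an anchor frame
            out.append(frame)
            out.extend(pending_b)  # the B frames that needed this anchor
            pending_b = []
    return out

display = list("IBBPBBP")
assert transmission_order(display) == list("IPBBPBB")
```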
• DVI is a proprietary standard based on the Intel i750 chipset. It was used by a number of video capture cards, such as the ActionMedia cards, but is now no longer supported. • Another ISO standard for video compression is H.261. This is designed for video conferencing applications, where the video usually consists of a head-and-shoulders shot of the participants. This does not change very much from frame to frame, so predicting from one frame to the next can yield very high compression ratios.
• AVI (Audio Video Interleaved) plays full-motion video and audio in a small window at about 15 frames/s (in software alone). Like QuickTime it uses a number of different codecs. An AVI file usually consists of a video track and two audio tracks. • QuickTime is multitrack, and so has the potential for a wide variety of uses. QuickTime uses a number of different compression schemes, including a JPEG codec and a Kodak Photo-CD codec.
Analogue Video Cameras • Standard camcorders can be used to produce video provided they have a 'video out' facility. It is better to capture the video directly rather than going through tape first. The signal from a video camera contains three channels of colour information; if these are transmitted separately, it is called RGB (red, green, blue), which gives the highest quality video. The video can also be transmitted as two chroma channels and one brightness channel; this is component video. When the signals are mixed it is called composite video. The S-VHS and Hi-8 video formats keep the colour and luminance information on two separate tracks, giving better quality than VHS. Hi-8 is better quality than S-VHS, and you can make unlimited VHS copies from it without the degradation that occurs when copying from VHS to VHS.
• One of the most important accessories for a camcorder will be a zoom lens. This should be as good quality as possible, as there are differences in resolution and optical quality. Standard consumer zoom lenses are usually around 6:1 or 8:1.
Videodiscs/Laserdiscs • Videodiscs have been on the market since 1978, and the largest markets are in the US and Japan. They store movies or stills, or a combination of the two, as analogue video in either NTSC or PAL format, and offer much better quality than standard VHS tapes. They can also contain an analogue or digital soundtrack (with NTSC you can have both). Many current laserdisc players can play either PAL or NTSC discs. The most common form of disc is 12" double sided, but there are also 8" and 5" discs available. The discs can be constant linear velocity (CLV) or constant angular velocity (CAV). CLV is the most common form for movies, as CLV discs can hold 60 minutes of video per side, but they are only suitable for continuous play. CAV discs hold only about half as much, as the recording density is reduced towards the outer edge, but individual frames can be randomly accessed, so these are often used for storing still images. A CAV disc can hold about 54,000 images. Unless it is stated, commercial discs are not resource discs, and the images on them should not be used without permission from the copyright holders.
Video Overlay Boards • In order to display analogue video on a computer monitor a video overlay board is required. Most boards have chroma keying, where one colour is chosen and made transparent, allowing the video window to be seen through the computer image. • Video boards can often digitise frames from the video. For example, the Screen Machine II board (available for both PC and Mac) can capture a frame at full (736 x 560), half or quarter screen size to several file formats. When capturing a frame in this way, if you do not know what size the final image will be, capture it at full screen to maximise quality. If you know, however, that the final image will only be displayed at 320 x 200, for example, then capturing it at half size will give equally good quality. • The main advantage of using analogue video and an overlay board is that high quality, full-screen, full-motion video will be available. However, delivering video in this way will be more expensive than digital video.
DVI • DVI (Digital Video Interactive) allows full-screen playback of motion video at 30 frames/second, with a synchronised soundtrack, through a special card. DVI was developed by Intel, who also produced the ActionMedia II DVI card containing the i750 chipset. This card is no longer supported, but software is available from Intel's FTP site to convert from DVI to AVI. Some of the compression techniques used in DVI have been incorporated into Intel's software-based Indeo codec for Video for Windows. Although this is software, its performance can be improved using hardware based on the i750, such as the ActionMedia card.
MPEG compression • There are a number of MPEG hardware compression and playback cards available for a range of systems. Hardware compression is much faster than software compression, allows greater resolution and gives better quality. Hardware playback also allows more frames/second with larger windows, and often supports the full MPEG audio capabilities; for example, the ReelMagic PC card provides video and sound playback in a 320 x 200 pixel window at 30 frames/second.
Other Accelerators • Hardware accelerators are available for Microsoft's Video for Windows which will allow AVI files to be played full screen at 30 frames/second, for example PC Prime Time.
Software • QuickTime is an open proprietary standard designed by Apple. It uses a number of different codecs, including motion JPEG (MJPEG), RLE and Cinepak, and runs in software alone on Macs, but players are also available for Microsoft Windows and Unix machines. The runtime players are freely distributable and available from Apple. Some authoring packages on the PC also support QuickTime movies. The shareware package Xanim plays QuickTime files (and many other formats) on Unix machines.
• Video for Windows provides a format called Audio Video Interleaved (AVI), for which a number of codecs have been developed by Microsoft and third-party developers. The Video for Windows software itself comes with tools to capture, edit and play back video sequences, though Microsoft are no longer developing these tools and will be relying on products from third-party developers. Video for Windows was originally released with two codecs, Video 1 and RLE (run-length encoding), but other codecs are being released, including Indeo from Intel, and Cinepak. Both Indeo and Cinepak offer Mac compatibility and audio compression. Which codec you choose depends on the type of video you are compressing, the machine it will be delivered on, and the delivery method, e.g. CD-ROM. Indeo, for example, offers better quality on high-end machines than Video 1, but Video 1 may be better on low-end machines. The runtime player for Video for Windows is freely distributable, and can be obtained from Microsoft's FTP site (ftp.microsoft.com) and other Internet sites.
• There are a number of MPEG encoders and decoders available in software alone, both 'professional' software and shareware or public domain. These mostly support MPEG-1 only, and most handle only video or only audio, not both. Some of the packages will also convert between MPEG and other formats such as AVI and QuickTime. The Xing encoders/decoders only support I frames, rather than all three frame types (I, P and B), so some MPEG movies will not play back correctly on this software. Many of the packages are restricted to small windows for playback; for example, the VMPEG software (for DOS) can play back 16 frames/second in a 160 x 120 window on a 386DX33. Players are available for most platforms including PC, Mac, X Windows, Amiga and NeXT machines.
Animation • Simple animations can be done in many authoring packages by just putting an object on the screen and dragging it around with the mouse. The path is remembered, and on playback the object moves along it. For many purposes these very simple animations may be sufficient; for more complex animation a specialised package may be necessary.
• Traditional cel animation requires key frames (the first and last frames of a movement), with the series of frames in between drawn in a process called tweening. Computer animation programs employ basically the same technique, though how the frames are created is of course different. Although you can specify the frame rate, in practice it may well depend on your hardware: how fast changes can be computed and screens redrawn.
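Tweening between two key frames can be sketched as simple linear interpolation (a minimal illustration; real packages interpolate many more properties than position):

```python
# Given two key-frame positions, generate the in-between frames by
# linear interpolation.
def tween(start, end, frames):
    """Return `frames` positions from start to end inclusive."""
    (x0, y0), (x1, y1) = start, end
    return [
        (x0 + (x1 - x0) * t / (frames - 1),
         y0 + (y1 - y0) * t / (frames - 1))
        for t in range(frames)
    ]

path = tween((0, 0), (100, 50), 5)
assert path[0] == (0.0, 0.0)
assert path[-1] == (100.0, 50.0)
assert path[2] == (50.0, 25.0)     # the midpoint frame
```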
• Animations can be stored in 'standard' digital movie formats such as AVI and QuickTime, but there are also formats which were designed especially with computer animation in mind, such as the FLI and FLC formats from Animator Pro.
Hardware and Software: Sound Capture • If you require really high quality sound, then it is probably best recorded in a special studio. However, if sampling at 22 kHz is sufficient, then a standard good-quality tape deck will do. Most video cassette recorders also have good audio output.
• Once you have captured your sounds you will usually need to edit them to some extent. For digital files there are many suitable programs, such as WaveEdit, which comes with the Windows Multimedia Extensions. These programs allow you to cut and paste sounds and often to add special effects. Some programs will also allow you to mix MIDI and digital files. Producing and editing MIDI files requires some musical knowledge and is beyond the scope of this introduction.
Playback: Sound cards • Macs all come with 8-bit sound, but for 16-bit sound add-in cards are needed. There are an increasing number of cards available that provide both 16-bit stereo audio in and out, and video in and out. • PCs come with no real ability to play sounds; special sound cards must be bought. The MPC Level II specification calls for 16-bit sound, but for simple sound effects 8 bit would be sufficient. For playing music a 16-bit soundcard is required.
Speakers/Headphones • The kind of speakers you use depends very much on what kind of application you are running/developing. For multimedia presentations in front of an audience, external speakers with their own amplifiers will be necessary.
Data Storage: Hard disks
• Multimedia requires large amounts of disk space, and a hard disk should be at least 200 MB (more for development) for a stand-alone machine. Even when multimedia is delivered over a network a large disk may be useful, for example when running Windows over a network.
• Hard disks can be IDE or SCSI. In machines other than PCs they will usually be SCSI. For a PC, IDE drives have a faster response time, and the controllers are generally cheaper than SCSI controllers. SCSI has the advantage that one controller can support several different devices, and that the device drivers are written for the adaptor rather than for specific devices. With equal drives, IDE will perform better under DOS, but SCSI will be better under multitasking operating systems such as Unix and NT.
• When buying hard drives look for fast access and data transfer rates, but remember that your machine (CPU, bus, cache etc.) will affect the overall performance. Do compare benchmarks, but make sure you know what is being tested and under what operating system.
Removable Hard disks • Removable hard disks are now available on many platforms. These have the advantage that they can be moved relatively easily from machine to machine, and have similar access times to normal hard disks, but they cost much more per MB than optical drives. Examples include the SyQuest drives for the Apple Macintosh, which come in 44, 88 or 200 MB 5.25" cartridges and 105 or 270 MB 3.5" cartridges.
CD-ROM • The CD was originally developed to deliver audio data; this standard was called the Red Book standard. CD-ROMs (Compact Disc Read Only Memory) were then developed to store a range of data in digital format. A compact disc can store about 650 MB of data, but you cannot write to a CD on most drives (see WORM below). As CDs were originally designed to deliver audio data, a data transfer rate of 150 K/s was sufficient. For multimedia applications this is not sufficient, however, and in order to increase the speed some drives spin at 2, 3, 4 or even 6 times the speed of a standard audio drive. If you will be accessing much data off a CD, particularly anything requiring random access such as interactive multimedia applications, you should look for at least a 2x drive. Some drives come with buffers. One form is the 'circular buffer read head', which helps maintain a steady data flow if the processor stops reading the drive to perform another task; alternatively, the table of contents of the CD may be copied to a buffer. You should also check the amount of CPU time used at a given data rate, as this can vary a great deal (<10% to >90%) between drives that claim the same data transfer rates. The MPC II specification states that CPU usage should be less than 40% at a data transfer rate of 150 K/s.
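The effect of drive speed on transfer time can be worked through directly (using the 150 K/s single-speed rate and 650 MB capacity quoted above, and taking 1 MB = 1024 KB):

```python
# Time to read a full CD at a given speed multiple of the 150 KB/s
# single-speed rate.
def read_time_minutes(megabytes, speed_factor):
    kb_per_s = 150 * speed_factor
    return megabytes * 1024 / kb_per_s / 60

single = read_time_minutes(650, 1)
quad = read_time_minutes(650, 4)
assert round(single) == 74          # about the length of a Red Book audio CD
assert round(quad) == 18            # a 4x drive reads the same data in a quarter of the time
```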
• Data is recorded on a CD at the same density throughout the disc, so the linear velocity of the head is constant while the angular velocity changes, from about 200 rpm to 530 rpm on a standard drive. This makes it more difficult to access data randomly, as the disc must be accelerated and decelerated, and so access speed is much slower than from a hard disk.
• Every CD has an index area, which contains track details and is the first thing read by the drive. If data is added to the CD in several sessions, as in Kodak's Photo CD, a separate index area is added for each session. A multisession drive is one which will look for these multiple index areas. • The ISO-9660 format is the standard file structure for CDs used by computers. It has file-naming conventions similar to those of MS-DOS. HFS (Hierarchical Filing System) is used on Apple Macs, allows up to 32-character file names, and is the format used on CDs intended only for Macs, though with appropriate drivers Macs can read ISO-9660 discs.
OTHER STORAGE • CD-RW • DVD • Blu-ray • SD card and other similar media
DISPLAY • CRT-MONITOR • LCD-PROJECTOR
PRINTER • DOT-MATRIX • LASER • INK-JET
Standards: Introduction • Creating multimedia usually involves a large investment of time and/or resources. Using standards to ensure the longevity and portability of an application is one way of protecting this investment. Some of the standards for the components of multimedia have already been discussed; this section will cover the important standards for providing structure to multimedia applications.
MHEG • MHEG stands for Multimedia/Hypermedia Expert Group. This is a standard for hypermedia document representation, suited to interactive hypermedia applications and to many interactive multimedia applications currently available on CD-ROM. In MHEG the users provide the data file formats, and it provides a hook to identify a variety of types. By supporting various file types it can provide a standardised interchange mechanism independent of file structure. MHEG objects come in four types: input (e.g. a button), output (e.g. a graphic), interactive (containing input and output) and hyperobject (input and output with links between them). It supports various synchronisation modes for presenting output objects. British Telecom have a demonstration application called MADE.
PREMO • PREMO stands for Presentation Environment for Multimedia Objects, and is an ISO standard being developed to provide a standardised development environment for multimedia applications. It concentrates mainly on presentation techniques. One of the major goals of PREMO is to integrate different media and their presentation techniques into the same framework. Because new techniques are continually being developed, and techniques may be application dependent, PREMO uses an object-oriented approach. This means that existing objects may inherit new behaviour, allowing reuse of objects without having to specify entirely new standards. Since distributed environments are now widespread, the PREMO specification will allow for the implementation of multimedia services over a network.
• The conceptual framework has three areas: an object model, the activity of objects, and event handling. The object model is based on subtyping and inheritance. Since synchronisation is needed, eg between video and sound track, objects need to be active. Objects may have synchronous, asynchronous or sampled operations. Events and their propagation are also fundamental to synchronisation. • A component in PREMO is a collection of object types and non-object data types. Objects within a component are designed for close cooperation. There is a foundation component which is inherited by all other components and provides fundamental functionality. Foundation objects include: data objects (eg 2D points), producer objects which define the processing of data objects, porter objects for interconnection to external environments and systems, controller objects which coordinate cooperation among other objects, aggregation objects (eg lists), manager objects providing life-cycle services for objects (eg creation), and event-handler objects. • PREMO is designed to work with existing and emerging standards. For example, it will provide services which can be used to create an MHEG engine, which could be recognised as a PREMO component.
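The idea that every PREMO object inherits fundamental services from a foundation component can be sketched as follows. This is a toy illustration under invented names, not the PREMO specification's actual API; the doubling transform is purely for demonstration.

```python
# Hypothetical sketch: a foundation class providing a shared
# life-cycle service (object ids), inherited by data and producer
# objects in the PREMO style (all names invented for illustration).
class FoundationObject:
    """Fundamental services shared by all objects, eg creation ids."""
    _count = 0

    def __init__(self):
        FoundationObject._count += 1
        self.oid = FoundationObject._count  # manager-style creation service

class DataObject(FoundationObject):
    """Carries raw media data, eg a 2D point."""
    def __init__(self, x, y):
        super().__init__()
        self.x, self.y = x, y

class ProducerObject(FoundationObject):
    """Defines the processing of data objects."""
    def process(self, point):
        # Toy transform: scale the point by two.
        return DataObject(point.x * 2, point.y * 2)

p = DataObject(1, 2)
q = ProducerObject().process(p)
print(q.x, q.y)  # 2 4
```

Because both classes inherit from the foundation class, services added there (here, object ids) become available to every object without changing the subclasses, which is the reuse argument the slide makes.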
HyTime • HyTime, the Hypermedia/Time-based Structuring Language, is an ISO standard for the representation of open hypermedia documents and is an application of SGML. Like SGML it does not specify the document content, but provides a standard way in which different types of information can be combined. It is not a DTD, but provides guidelines for DTDs; for example, the Standard Music Description Language (SMDL) is an application of HyTime. HyTime allows the association of objects within documents using hyperlinks, and the interrelation of objects in time and space. An object in HyTime is unrestricted: it could be video, sound, text etc. A HyTime engine recognises HyTime documents and processes them. Several commercial HyTime engines are currently being developed.
SGML • Standard Generalized Markup Language is an ISO standard markup language for text. It is used to write Document Type Definitions (DTDs), which are descriptions of classes of structured information. Documents are then marked up according to a DTD. DTDs do not say how the information in the document should be processed, eg they do not say how it will appear on paper; this is handled by other programs.
HTML • HTML (Hypertext Markup Language) is a DTD used in the World Wide Web (see 11.2). Using HTML, titles, headings, paragraphs etc are marked up. How these appear on the screen depends on the browser used (eg Mosaic) and how it is configured. HyTime (see 7.4) is based on SGML, and MHEG (see 7.2) has an SGML encoding. SGML is also increasingly used in academic publishing and in some commercial hypermedia systems.
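The separation the slide describes, markup naming the structure while the browser decides the appearance, can be seen by walking an HTML fragment and listing only its structural tags. This minimal sketch uses Python's standard `html.parser` module; the document and class name are invented for the example.

```python
# List the structural elements of an HTML fragment: the markup says
# *what* each part is (title, heading, paragraph), not how it looks.
from html.parser import HTMLParser

class StructureLister(HTMLParser):
    def __init__(self):
        super().__init__()
        self.structure = []

    def handle_starttag(self, tag, attrs):
        # Record only the structural tags we care about.
        if tag in ("title", "h1", "p"):
            self.structure.append(tag)

doc = ("<html><head><title>Demo</title></head>"
       "<body><h1>Hi</h1><p>Some text.</p></body></html>")
parser = StructureLister()
parser.feed(doc)
print(parser.structure)  # ['title', 'h1', 'p']
```

Two different browsers fed the same `doc` may render the heading in different fonts and sizes, but both receive the same structural information.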
ODA • ODA (Open Document Architecture) is a document interchange format for transferring documents between open systems. It supports text and graphics, and an interface to MHEG is planned. Unlike SGML, ODA does concern itself with the physical appearance of the document.
Acrobat • Acrobat is a de facto standard from Adobe for portable document representation, based on PostScript. Documents are stored in Portable Document Format (PDF), which can be transferred between different systems. Unlike PostScript, data can be extracted from the document. It also includes support for hypertext links and structured formats. Functionally it is very similar to ODA, and all the formatting structure of the document is retained. Acrobat documents can be viewed on screen or printed using the runtime reader, which is normally distributed with documents.


