This page is a summary of the current proposals & thinking at a high level for the network architecture of the MediaStation player application. It is initially a summary of an email conversation between HugoMills and NeilFerguson, and should be treated as a straw man, but will (hopefully) come to reflect the decisions on architecture made during wider discussions.

If you edit this page, please indicate what you changed, and mark additions or major edits with your name or initials.

Consider a bunch of machines on a network. Each machine can provide a number of services: "real" media sources (tuners, etc); data stores (for recording and playback); output devices (TVs, audio systems). There is a single "multiplexer" or MCP (Master Control Program) running on each machine in the system which handles the delivery of [broadcast?] packets to the individual services.

At present, it seems like a good idea to run at least the communication and negotiation part of each service as a separate thread within the MCP. These threads will spawn off other programs (e.g. mplayer, dvbstream) to handle the hard bits, like actually displaying a video on the screen (or whatever).
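To make that concrete, here's a rough Python sketch of the service-thread idea, under the assumption that each service is a thread inside the MCP that spawns a helper process. The class and field names are made up for illustration; the helper command is injected so the sketch stays testable rather than depending on mplayer being installed.

```python
# Hypothetical sketch: each service runs as a thread inside the MCP and
# delegates the hard work to an external helper process.
import subprocess
import sys
import threading

class ServiceThread(threading.Thread):
    """One service inside the MCP; spawns a helper program to do the real work."""
    def __init__(self, name, helper_cmd):
        super().__init__(name=name)
        self.helper_cmd = helper_cmd
        self.returncode = None

    def run(self):
        # The real system would pass e.g. ["mplayer", stream_url] here.
        proc = subprocess.Popen(self.helper_cmd,
                                stdout=subprocess.DEVNULL,
                                stderr=subprocess.DEVNULL)
        self.returncode = proc.wait()

# A trivial stand-in helper that exits immediately with success.
svc = ServiceThread("video-out", [sys.executable, "-c", "pass"])
svc.start()
svc.join()
```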

The system is controlled by one or more control applications (some of which may have user interfaces of various types). These negotiate with services to provide the requested operation(s). In addition, services may negotiate with each other when necessary (see "Service migration", below).


Each service (whether it's a data producer or a data consumer or both) can engage in a number of different transport methods. So, for example, all services should be able to handle multicast IP packet streams (in, out or in+out as appropriate). Some services will be able to handle local, specialised transports (e.g. DVB card receiving and playing a DVB stream – the data doesn't even hit the PCI bus); some will handle other forms of local or networked transport (e.g. reading a file on the local machine).
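One way to model this (a sketch, with made-up transport names): every service advertises the transports it speaks, multicast IP is the mandatory baseline, and two services can always find a common transport because of that baseline.

```python
# Sketch: services advertise transports; "multicast-ip" is the required
# baseline, specialised transports (dvb-local, local-file, ...) are extras.
MANDATORY = {"multicast-ip"}

class Service:
    def __init__(self, name, transports):
        assert MANDATORY <= set(transports), "all services must speak multicast"
        self.name = name
        self.transports = set(transports)

def common_transports(source, sink):
    """Transports both ends can use; never empty thanks to the baseline."""
    return source.transports & sink.transports

dvb = Service("dvb-tuner", ["multicast-ip", "dvb-local"])
tv = Service("tv-out", ["multicast-ip", "local-file"])
```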

On their own, these services will do nothing until told to. Each service should be able to do its own task scheduling, and know when it's been reserved (and what work it's been asked to do, and its own capabilities -- for example, a single DVB-T tuner can receive BBC1 and BBC2 at the same time, since they're carried on the same multiplex, but not BBC1 and ITV2). Services should have some concept of their own quality, too: a 64kbit mp3 on a storage service is going to be lower quality than the 196kbit ogg of the same track, and that will be (nominally) lower quality than the CD drive.
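The tuner capability check could look something like this sketch. The channel-to-multiplex table is illustrative, not real broadcast data; the point is just that a single tuner can serve a set of channels only if they all sit on one mux.

```python
# Hypothetical mux assignments, for illustration only.
MUX = {"BBC1": "mux-1", "BBC2": "mux-1", "ITV2": "mux-2"}

def tuner_can_serve(channels):
    """A single DVB-T tuner can serve a request iff all channels share a mux."""
    return len({MUX[c] for c in channels}) <= 1
```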

Service clashes & locking

Some services may share the same hardware – for example, an audio-playing service and a video-playing service on the same machine will attempt to use the same audio hardware. Let's assume, for example, that you have told a box with DAB that you'd like to listen to Radio 4 (or perhaps even a decent station). You then hit the Watch Top Gear button on your remote - what would happen? Do you fail because the audio device is in use? Do you stop the existing DAB audio stream - and if you do, then how do you know you're 'allowed' to?
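One possible answer, sketched below: each request carries a priority, and a service holding shared hardware is preempted only by a strictly higher-priority request. The priority scheme and class names are assumptions, not a settled design.

```python
# Sketch of priority-based arbitration for shared hardware.
class HardwareLock:
    def __init__(self):
        self.holder = None  # (client, priority) or None

    def request(self, client, priority):
        """Return True if the client now holds the hardware."""
        if self.holder is None or priority > self.holder[1]:
            self.holder = (client, priority)
            return True
        return False

audio = HardwareLock()
```

Under this rule, Top Gear at the same priority as the running Radio 4 stream would fail, but a deliberately higher priority would preempt it.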


As well as the service providers, there are control applications. These are the user-interfaces (UIs) to the network, and handle the control of the system. When a UI starts up, it broadcasts a query to the network, asking for all services to reply to it. It gets back a complete list of the currently-running services, and their types/capabilities and locations.

If a service is shut down cleanly (such as a machine shutting down), it should broadcast a "shutting down" message, so that active UIs can remove it from the list of active services. Similarly, a service starting should broadcast its presence. Each active service (i.e. those doing stuff) should know who its clients or providers are, so that if all of them shut down, it can stop providing the service.
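A sketch of what the discovery messages and the UI's bookkeeping might look like, assuming a JSON wire format (the message kinds and field names here are illustrative, not a settled protocol):

```python
import json

def make_msg(kind, service=None, **fields):
    """Encode one broadcast message; kinds are an assumed vocabulary."""
    assert kind in {"query", "reply", "starting", "shutting-down"}
    msg = {"kind": kind, **fields}
    if service:
        msg["service"] = service
    return json.dumps(msg).encode()

class ServiceDirectory:
    """What a UI keeps: the live set of services, updated from broadcasts."""
    def __init__(self):
        self.services = {}

    def handle(self, packet):
        msg = json.loads(packet)
        if msg["kind"] in ("reply", "starting"):
            self.services[msg["service"]] = msg
        elif msg["kind"] == "shutting-down":
            self.services.pop(msg["service"], None)
```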

When someone uses a UI to (say) play a DAB broadcast through a stereo system, the UI broadcasts a request for a DAB receiver capable of playing Radio 4 now, and a similar request for a stereo system capable of playing music now at a specific location. It then looks at the responses, which will contain for each service a list of the capabilities of the service (such as available times), available transports, and transport-dependent information (such as location, bandwidth, free space, whatever). With the information from the available source transports and sink transports, the UI can pick the "least cost" source/sink combination and tell the two services in question to do their thing, negotiating (e.g.) an appropriate multicast network and port.
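The "least cost" pick could be as simple as the sketch below: each response lists the service's transports with a nominal cost, and the UI chooses the cheapest transport both ends share. The cost numbers are made up purely for illustration.

```python
# Sketch: choose the (source, sink, transport) triple with the lowest
# combined cost, considering only transports both ends support.
def pick(sources, sinks):
    best = None
    for src_name, src_costs in sources.items():
        for sink_name, sink_costs in sinks.items():
            for t in src_costs.keys() & sink_costs.keys():
                cost = src_costs[t] + sink_costs[t]
                if best is None or cost < best[0]:
                    best = (cost, src_name, sink_name, t)
    return best and best[1:]

# Illustrative responses: the DAB box can feed its own speaker cheaply,
# or multicast to the stereo at a higher cost.
sources = {"dab-box": {"multicast-ip": 2, "local-audio": 1}}
sinks = {"stereo": {"multicast-ip": 2},
         "dab-box-speaker": {"local-audio": 1}}
```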

Service migration

Ideally, a service being shut down should attempt to hand off its obligations (both current and scheduled) to another service. This will involve another negotiation phase, where the service being shut down will negotiate with the other services for someone to take up its task. Service migration may also happen when (for example) a recording service becomes full and it needs to switch recording to another place.
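A sketch of the hand-off negotiation: the departing service offers each obligation to the other services in turn and keeps a list of whatever nobody accepts. `can_take` stands in for a real capability-and-schedule check; the class shape is an assumption.

```python
# Sketch of migrating tasks from a service that is shutting down (or full).
def migrate(tasks, peers):
    """Return (reassignments, orphaned tasks)."""
    moved, orphaned = {}, []
    for task in tasks:
        taker = next((p for p in peers if p.can_take(task)), None)
        if taker:
            taker.accept(task)
            moved[task] = taker.name
        else:
            orphaned.append(task)
    return moved, orphaned

class Peer:
    def __init__(self, name, free_slots):
        self.name, self.free_slots, self.tasks = name, free_slots, []
    def can_take(self, task):
        return self.free_slots > 0
    def accept(self, task):
        self.tasks.append(task)
        self.free_slots -= 1
```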

This may well cause problems for things like signal synchronisation and switching, but I reckon that's something we can deal with later…

As long as you have a guaranteed overlap then it shouldn't be too much of a problem to splice things together. It would be nice to offer on-demand combining, overnight combining (i.e. so it's ready the next day, wherever there's enough space) and best of all – on-the-fly combining. I guess that last one would take quite a lot of read-ahead and some fairly fast processing to guarantee such a service, though. Perhaps that particular facility is best left to version 2. Otherwise though, I think this would be an excellent and much-used feature.
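The splice itself is straightforward under the guaranteed-overlap assumption, as this toy sketch shows: keep the first segment whole and append only the non-overlapping tail of the second. Segments are modelled as (start_time, samples) with one sample per time unit, purely for illustration.

```python
# Sketch: splice two overlapping recording segments into one.
def splice(seg_a, seg_b):
    start_a, data_a = seg_a
    start_b, data_b = seg_b
    end_a = start_a + len(data_a)
    assert start_b <= end_a, "segments must overlap (or abut) to splice"
    tail = data_b[end_a - start_b:]
    return (start_a, data_a + tail)
```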

Other notes and considerations

Centralised vs decentralised

This architecture can be centralised, with a named machine watching all the services come & go, and handling the negotiation and despatch of the jobs (either for scheduling by the services themselves as in the above model, or scheduled centrally). It can be made non-dynamic by hard-coding the locations of the services in each UI's configuration. It can be made semi-centralised by having each machine involved capable of being the centre, and the centres deciding which one actually takes control by electing a leader. None of these options are architecturally as nice, though, to my mind.
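For the semi-centralised variant, the election can be trivial if every centre-capable machine sees the same candidate list: a deterministic rule (highest ID wins, as in a bully-style election) means they all independently agree on the same leader. A sketch, with made-up machine IDs:

```python
# Sketch: deterministic leader election over a shared candidate list.
def elect_leader(candidates):
    """candidates: set of IDs of machines able to act as the centre."""
    return max(candidates)

def is_leader(my_id, candidates):
    return my_id == elect_leader(candidates)
```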

We could go one step further, and have the individual services negotiate with each other to find an optimal service delivery contract, and send their agreed contract back to the requesting UI. That probably gets a little hairier in the network protocol, though, and probably extends into the field of academic research into agents.

Other kinds of hardware capabilities

If you want to record London's 104.9 XFM at 9pm and ALSO record Top Gear, but you have just a single box, can you do that? Can we write quickly enough to save the two streams at once? Even if we're not experiencing a conflict of sinks, we'll still have to make sure we can actually save the data quickly enough. Oh dear; what if somebody puts a slow hard drive in, or one which doesn't have DMA enabled for some reason? Will we need to benchmark a machine on installation, and when hardware appears to change, to take this into account too?
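The install-time benchmark idea could look something like this sketch: time a burst of writes to the recording directory and compare the measured rate against what the requested streams need. The sizes here are tiny so the sketch runs quickly; a real benchmark would write far more data. The 2x headroom factor is an assumption, not a measured figure.

```python
# Sketch: measure rough sustained write speed, then check it against the
# combined bitrate of the streams we want to record simultaneously.
import os
import tempfile
import time

def write_throughput(path, chunk=1 << 16, chunks=64):
    """Rough write speed to `path`, in bytes per second."""
    data = b"\0" * chunk
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(chunks):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())
    elapsed = max(time.monotonic() - start, 1e-9)
    os.unlink(path)
    return (chunk * chunks) / elapsed

def can_record(streams_kbit, throughput):
    """Can we sustain all streams at once, with 2x headroom for seeks etc.?"""
    needed = sum(streams_kbit) * 1000 / 8  # bytes per second
    return needed * 2 <= throughput
```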