FIG. 7 shows a camera system (700) that operates to time-stamp video content captured from multiple cameras (740) relative to a recorded and time-synchronized location of a portable tracking unit (722). The positions of the cameras (740) are known to the system. Based on the time and position data for each uniquely identifiable tracking unit, an editing suite (770) automatically compiles a composite video made up of time-spliced video segments from the various cameras. Video or still images captured by the cameras (740) are cross-referenced against the client address stored in the database (760) and related to the assigned, uniquely identifiable tracking unit (722).
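As one possible implementation of the time-splicing performed by the editing suite (770), the sketch below selects, for each interval of the tracking unit's recorded track, the camera whose known position lies nearest to the unit and merges consecutive intervals covered by the same camera. The class names, the planar distance metric, and the nearest-camera selection rule are illustrative assumptions rather than features recited above.

```python
from dataclasses import dataclass

@dataclass
class LocationFix:          # one time-synchronized position of the tracking unit (722)
    timestamp: float        # synchronized time, in seconds
    x: float
    y: float

@dataclass
class Camera:               # one of the cameras (740) at a known, fixed position
    camera_id: str
    x: float
    y: float

def nearest_camera(cameras: list[Camera], fix: LocationFix) -> Camera:
    """Return the camera closest to the tracking unit at a given fix."""
    return min(cameras, key=lambda c: (c.x - fix.x) ** 2 + (c.y - fix.y) ** 2)

def compile_segments(cameras: list[Camera], track: list[LocationFix]):
    """Reduce a time-ordered track of fixes to (start, end, camera_id) splice
    instructions from which the composite video can be assembled."""
    segments: list[tuple[float, float, str]] = []
    for start, end in zip(track, track[1:]):
        cam = nearest_camera(cameras, start)
        if segments and segments[-1][2] == cam.camera_id:
            # Same camera still applies: extend the previous segment.
            segments[-1] = (segments[-1][0], end.timestamp, cam.camera_id)
        else:
            segments.append((start.timestamp, end.timestamp, cam.camera_id))
    return segments
```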
A server (750) is arranged to use the client address to send reminder messages, which reminder messages may include selected images taken from the composite video.
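A minimal sketch of this lookup-and-notify step, assuming the database (760) exposes a SQLite table named clients that maps each tracking-unit identifier to the stored client address; the table name, schema, and dispatch transport are assumptions made for illustration.

```python
import sqlite3
from contextlib import closing

def send_reminder(db_path: str, tracking_unit_id: str, image_refs: list[str]) -> None:
    """Look up the client address assigned to a tracking unit and dispatch a
    reminder message referencing selected images taken from the composite video."""
    with closing(sqlite3.connect(db_path)) as conn:
        row = conn.execute(
            "SELECT client_address FROM clients WHERE tracking_unit_id = ?",
            (tracking_unit_id,),
        ).fetchone()
    if row is None:
        return  # no client address assigned to this tracking unit
    message = {
        "to": row[0],
        "subject": "Your composite video is ready",
        "images": image_refs,  # selected stills from the composite video
    }
    dispatch(message)

def dispatch(message: dict) -> None:
    # Placeholder for the messaging transport of the server (750),
    # e.g. SMTP or a push-notification service.
    print(f"reminder to {message['to']} with {len(message['images'])} image(s)")
```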
Alternatively, a client (720) can use the client address to access the database and view the composite video.
In the event that the client (720) wants to receive a fair copy of the composite video, the server (750) is arranged to process the request and send the composite video to the client.
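One way the server (750) might fulfil such a request is sketched below; the per-unit MP4 storage layout and the delivery mechanism are illustrative assumptions.

```python
from pathlib import Path

VIDEO_ROOT = Path("/var/composite_videos")  # assumed storage location

def handle_copy_request(tracking_unit_id: str, client_address: str) -> bytes | None:
    """Return the composite video for delivery to the client, or None if no
    composite video has been compiled for this tracking unit yet."""
    video_path = VIDEO_ROOT / f"{tracking_unit_id}.mp4"
    if not video_path.exists():
        return None
    payload = video_path.read_bytes()
    # Actual delivery (e-mail attachment, download link, streaming, ...) is
    # left to the server's transport layer; only the intent is logged here.
    print(f"sending {video_path.name} ({len(payload)} bytes) to {client_address}")
    return payload
```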
Streaming of multiple video feeds from different cameras, each of which encodes synchronized time, allows cross-referencing of stored client-specific data and, ultimately, the assembly of the resultant composite video that reflects a time-ordered succession of events having direct relevance to the client (720).
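Because every camera encodes the same synchronized clock, this cross-referencing can reduce to a timestamp lookup. The sketch below assumes each feed is available as a time-ordered list of (timestamp, frame reference) pairs and uses a tolerance of roughly one frame period at 25 fps; both assumptions are illustrative.

```python
import bisect

def index_feed_by_time(frames):
    """frames: time-ordered (timestamp, frame_ref) pairs; the ordering holds
    because every camera encodes the same synchronized clock."""
    return [t for t, _ in frames], frames

def frames_near(feed_index, fix_time, tolerance=0.04):
    """Return frames whose encoded time lies within `tolerance` seconds of a
    tracking-unit fix (0.04 s is roughly one frame at 25 fps)."""
    times, frames = feed_index
    lo = bisect.bisect_left(times, fix_time - tolerance)
    hi = bisect.bisect_right(times, fix_time + tolerance)
    return frames[lo:hi]

def cross_reference(feeds, fix_times):
    """Collect, for every time-stamped fix in a client's track, the matching
    frames from each synchronized camera feed."""
    indexed = {cam_id: index_feed_by_time(f) for cam_id, f in feeds.items()}
    return [
        (t, cam_id, frame_ref)
        for t in fix_times
        for cam_id, idx in indexed.items()
        for _, frame_ref in frames_near(idx, t)
    ]
```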