A method of content adaptive video encoding is disclosed. The method comprises segmenting video content into segments based on predefined classifications or models. Examples of such classifications include action scenes, slow scenes, low- or high-detail scenes, and scene brightness. Based on the segment classifications, each segment is encoded with a different
encoder chosen from a plurality of encoders. Each
encoder is associated with a model. The chosen
encoder is particularly suited to encoding the unique
subject matter of the segment. The coded bitstream for each segment includes information indicating which encoder was used to encode that segment. Using this information, a matching decoder is chosen from a plurality of decoders, so that each segment is decoded with a decoder suited to its classification or model. Scenes that do not fall into a predefined classification, or whose content makes classification more difficult, are segmented, coded, and decoded using a generic coder and decoder.
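For illustration only, the following Python sketch models the scheme described above: segments are classified, each segment is encoded by the encoder associated with its model, the encoder identifier is carried in the coded bitstream, and the decoder uses that identifier to select the matching decoder, falling back to a generic coder and decoder when no predefined classification applies. All names here (Segment, classify_segment, ENCODERS, DECODERS, and the stub codecs) are hypothetical and are not the disclosed implementation.

```python
# Hypothetical sketch of the content-adaptive coding scheme; all names and
# stub codecs are illustrative assumptions, not the patent's implementation.
from dataclasses import dataclass

# Classification labels drawn from the examples in the text; "generic" is the
# fallback model for scenes that fit no predefined classification.
CLASSES = ("action", "slow", "low_detail", "high_detail", "bright", "generic")

@dataclass
class Segment:
    data: str    # stand-in for the segment's raw video data
    label: str   # classification assigned during segmentation

def classify_segment(data: str) -> str:
    """Placeholder classifier: a real system would analyze motion, detail,
    brightness, etc.; here we simply fall back to the generic model."""
    return "generic"

# One encoder/decoder per model; each stub just tags the payload so the
# example runs and round-trips.
ENCODERS = {label: (lambda data, l=label: f"<{l}>" + data) for label in CLASSES}
DECODERS = {label: (lambda payload, l=label: payload.removeprefix(f"<{l}>"))
            for label in CLASSES}

def encode(segments):
    """Encode each segment with the encoder matching its model and embed
    the encoder identifier in the coded bitstream for that segment."""
    bitstream = []
    for seg in segments:
        encoder = ENCODERS.get(seg.label, ENCODERS["generic"])
        bitstream.append({"encoder_id": seg.label, "payload": encoder(seg.data)})
    return bitstream

def decode(bitstream):
    """Use the encoder identifier carried with each segment to choose the
    matching decoder from the plurality of decoders."""
    return [DECODERS.get(unit["encoder_id"], DECODERS["generic"])(unit["payload"])
            for unit in bitstream]

if __name__ == "__main__":
    raw = "frame0 frame1"
    seg = Segment(data=raw, label=classify_segment(raw))
    coded = encode([seg])
    print(decode(coded))  # recovers the original segment data from the stub codec
```

The key design point this sketch captures is that the decoder needs no independent analysis of the content: the encoder identifier travels with each coded segment, so segment-by-segment decoder selection is driven entirely by the bitstream.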