Visual sentiment analysis is attracting increasing attention as people more often express emotions through visual content. Recent algorithms based on Convolutional Neural Networks (CNNs) have considerably advanced emotion classification, which aims to distinguish among emotional categories and assigns a single dominant label to each image. However, the task is inherently ambiguous, since an image usually evokes multiple emotions and its annotation varies from person to person. In this work, we address the problem via label distribution learning and develop a multi-task deep framework that jointly optimizes classification and distribution prediction. While the proposed method is best suited to distribution datasets annotated by multiple voters, majority voting is the widely adopted ground-truth scheme in this area, and few datasets provide multiple affective labels. Hence, we further exploit two weak forms of prior knowledge, expressed as similarity information between labels, to generate an emotional distribution for each category. Experiments conducted on distribution datasets, i.e., Emotion6, Flickr LDL, and Twitter LDL, as well as the largest single-label dataset, i.e., Flickr and Instagram, demonstrate that the proposed method outperforms state-of-the-art approaches.
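The two ideas in the abstract, generating a soft emotion distribution from label-similarity prior knowledge and jointly optimizing classification with distribution prediction, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the similarity matrix, the softmax-based distribution generation, the KL-divergence distribution loss, and the weighting factor `lam` are all assumptions for illustration.

```python
import numpy as np

# Illustrative emotion categories (Mikels-style set often used in this area).
EMOTIONS = ["amusement", "awe", "contentment", "excitement",
            "anger", "disgust", "fear", "sadness"]

def distribution_from_prior(dominant, similarity, temperature=1.0):
    """Turn similarity scores between the dominant label and all labels
    into a soft emotion distribution via a softmax (assumed scheme)."""
    scores = similarity[dominant] / temperature
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q), a common choice for a distribution-prediction loss."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def cross_entropy(q, label, eps=1e-12):
    """Standard single-label classification loss on predicted distribution q."""
    return float(-np.log(q[label] + eps))

def multi_task_loss(pred, dominant, prior_dist, lam=0.5):
    """Joint objective: classification term plus a weighted distribution
    term, mirroring the joint optimization described in the abstract."""
    return cross_entropy(pred, dominant) + lam * kl_divergence(prior_dist, pred)

# Toy similarity prior: each label most similar to itself, with extra
# similarity within the positive (first four) and negative (last four) groups.
n = len(EMOTIONS)
sim = np.eye(n) * 2.0
sim[:4, :4] += 1.0
sim[4:, 4:] += 1.0

prior = distribution_from_prior(0, sim)             # "amusement" dominant
pred = np.full(n, 1.0 / n)                          # uniform network output
loss = multi_task_loss(pred, dominant=0, prior_dist=prior)
```

In this sketch the generated distribution concentrates mass on the dominant label while spreading some probability to similar labels, which is the role the similarity-based prior plays for single-label datasets that lack per-voter annotations.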