# Paper Notes: DHN, a Neural-Network-Based Hashing Algorithm

2017-05-20
cwlseu

## Source

[deep-hashing-network-aaai16](http://ise.thss.tsinghua.edu.cn/~mlong/doc/deep-hashing-network-aaai16.pdf)

## Main Contributions

1. a fully-connected hashing layer to generate compact binary hash codes;
2. a pairwise cross-entropy loss layer for similarity-preserving learning;
3. a pairwise quantization loss for controlling hashing quality.
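The two loss terms above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the inner-product scale `alpha` and the penalty weight `lam` are illustrative choices, and the quantization term uses the smooth `log cosh` surrogate discussed later in these notes.

```python
import numpy as np

def pairwise_cross_entropy_loss(h_i, h_j, s_ij, alpha=0.5):
    """Pairwise cross-entropy loss on one pair of continuous hash codes.

    s_ij is 1 for similar pairs and 0 for dissimilar ones; theta is a
    scaled inner product of the two codes. Similar pairs are pushed
    toward large theta, dissimilar pairs toward small theta.
    """
    theta = alpha * np.dot(h_i, h_j)
    # log(1 + exp(theta)) - s_ij * theta, computed stably via logaddexp
    return np.logaddexp(0.0, theta) - s_ij * theta

def quantization_loss(z, lam=0.1):
    """Smooth quantization penalty pushing entries of z toward ±1.

    Uses log(cosh(|z| - 1)) as a differentiable surrogate for ||z| - 1|.
    """
    return lam * np.sum(np.log(np.cosh(np.abs(z) - 1.0)))
```

For identical codes, the similar-pair loss is near zero while the dissimilar-pair loss is large, which is the similarity-preserving behavior the first loss is meant to enforce; codes already at ±1 incur zero quantization penalty.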

## Dataset

NUS-WIDE is a public web image dataset. We follow the settings in (Liu et al. 2011; Lai et al. 2015) and use the subset of 195,834 images that are associated with the 21 most frequent concepts, where each concept consists of at least 5,000 images. CIFAR-10 is a dataset containing 60,000 color images in 10 classes, and each class has 6,000 images of size 32×32. Flickr consists of 25,000 images collected from Flickr, where each image is labeled with one of the 38 semantic concepts.

## Experimental Settings

We implement the DHN model based on the open-source Caffe framework (Jia et al. 2014). We employ the AlexNet architecture, fine-tune the convolutional layers conv1–conv5 and fully-connected layers fc6–fc7 that were copied from the pre-trained model, and train the hashing layer fch, all via back-propagation. As the fch layer is trained from scratch, we set its learning rate to be 10 times that of the lower layers. We use mini-batch stochastic gradient descent (SGD) with 0.9 momentum and the learning rate annealing strategy implemented in Caffe, and cross-validate the learning rate from $10^{-5}$ to $10^{-2}$ with a multiplicative step-size of 10. We choose the quantization penalty parameter λ by cross-validation from $10^{-5}$ to 100 with a multiplicative step-size of 10. We fix the mini-batch size as 64 images and the weight decay parameter as 0.0005.
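The two cross-validation sweeps above both walk a log-scale grid with multiplicative step 10, which can be written compactly with `numpy.logspace`. A minimal sketch (reading the upper bound "100" for λ as $10^{2}$, which is an assumption on my part):

```python
import numpy as np

# Learning-rate grid: 1e-5, 1e-4, 1e-3, 1e-2 (multiplicative step 10)
learning_rates = np.logspace(-5, -2, num=4)

# Quantization-penalty grid: 1e-5 ... 1e2, assuming "100" means 10^2
lambdas = np.logspace(-5, 2, num=8)
```

Each candidate value would then be evaluated on a held-out validation split, keeping the setting with the best retrieval metric.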

## Results

### Effectiveness of the Loss Functions

We investigate several variants of DHN: DHN-B is the DHN variant without binarization (h ← sgn(zl) not performed), which may serve as an upper bound of performance. DHN-Q is the DHN variant without the quantization loss (λ = 0). DHN-E is the DHN variant using the widely adopted pairwise squared loss instead of the pairwise cross-entropy loss.

## My Thoughts

1. Adding a quantization loss to the objective is effective.
2. The reformulation of the quantization loss teaches a useful trick: log cosh(x) serves as a smooth approximation to the absolute-value function |x|.
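The second point can be verified numerically: log cosh(x) behaves like x²/2 near 0 and like |x| − log 2 for large |x|, so it is differentiable everywhere and never deviates from |x| by more than log 2. A small sketch:

```python
import numpy as np

def logcosh(x):
    """Smooth approximation of |x|.

    Computed in the numerically stable form
        log(cosh(x)) = |x| + log(1 + exp(-2|x|)) - log(2),
    which avoids overflow of cosh for large |x|.
    """
    a = np.abs(x)
    return a + np.log1p(np.exp(-2.0 * a)) - np.log(2.0)

x = np.linspace(-5, 5, 11)
print(np.max(np.abs(logcosh(x) - np.abs(x))))  # bounded by log 2 ≈ 0.693
```

Because the gap |x| − log cosh(x) grows monotonically from 0 toward log 2, the surrogate is tight exactly where it matters (near 0) while keeping gradients well defined, which is why it works as a relaxation of ||z| − 1| in the quantization loss.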

## Follow-up Papers

[1] [Liong_Deep_Hashing_for_2015_CVPR_paper](http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Liong_Deep_Hashing_for_2015_CVPR_paper.pdf)

[2] [Simultaneous Feature Learning and Hash Coding with Deep Neural Networks](https://arxiv.org/pdf/1504.03410.pdf)

Copyright notice: this is an original post by the author; do not reproduce without the author's permission.
