Understanding Caffe Source Code 1: Blob Storage Structure and Design (2)

Caffe obtains pointers to the CPU-side and GPU-side data regions through the accessors above. When these functions are called, SyncedMemory decides by itself whether the data needs to be synchronized (how exactly it decides will be covered in detail in the post on SyncedMemory). When the CPU-side (GPU-side) data is accessed and the GPU-side (CPU-side) copy may have been modified, the data is synchronized to the CPU (GPU). The example below, taken from the Caffe website, illustrates when synchronization actually happens.

```cpp
// Assuming that data are on the CPU initially, and we have a blob.
const Dtype* foo;
Dtype* bar;
foo = blob.gpu_data(); // data copied cpu->gpu.
foo = blob.cpu_data(); // no data copied since both have up-to-date contents.
bar = blob.mutable_gpu_data(); // no data copied.
// ... some operations ...
bar = blob.mutable_gpu_data(); // no data copied when we are still on GPU.
foo = blob.cpu_data(); // data copied gpu->cpu, since the gpu side has modified the data
foo = blob.gpu_data(); // no data copied since both have up-to-date contents
bar = blob.mutable_cpu_data(); // still no data copied.
bar = blob.mutable_gpu_data(); // data copied cpu->gpu.
bar = blob.mutable_cpu_data(); // data copied gpu->cpu.
```

Note that once a mutable function has been called, the data is treated as modified even if nothing was actually written, so a subsequent call to the other side's mutable function will still trigger a copy. Therefore, whenever you know the data will not be modified, prefer the const accessors, and call the mutable functions only when you actually write to the data.
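To see this concretely, here is a minimal sketch (assuming a GPU-enabled Caffe build; the function name and blob dimensions are made up for illustration):

```cpp
#include "caffe/blob.hpp"

using caffe::Blob;

void const_vs_mutable_demo() {
  Blob<float> blob(1, 3, 4, 4);  // a blob of 48 floats
  blob.mutable_cpu_data();       // head -> HEAD_AT_CPU, even though nothing is written
  blob.gpu_data();               // copy cpu->gpu happens here, because of the mutable call
  blob.cpu_data();               // no copy: const access keeps both sides up to date
  blob.gpu_data();               // no copy either
}
```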

Main Member Functions

The main member functions of Blob are:

Basic functions, including constructors, set- and get-style accessors, and logical checks

The Reshape function, which sets the Blob's shape and allocates memory

The Update function, used to update parameters during network training: \(data = data - diff\)

Blob arithmetic functions, for slice statistics, the L1 norm, the L2 norm, scalar multiplication, and so on

Auxiliary functions, such as importing from and exporting to proto

The most important of these are described below.

```cpp
template <typename Dtype>
class Blob {
 public:
  Blob() : data_(), diff_(), count_(0), capacity_(0) {}

  /// @brief Deprecated; use <code>Blob(const vector<int>& shape)</code>.
  explicit Blob(const int num, const int channels, const int height,
      const int width);
  explicit Blob(const vector<int>& shape);
  // ......
};
```

Blob's constructors call the Reshape function, which assigns the shape member variable and performs the initial memory allocation. Reshape is also called in Layer::Reshape or Layer::Forward to set the dimensions of the output Blobs. If the input Blob of the whole network is reshaped, Net::Forward or Net::Reshape must be called so that the shape of each layer's Blobs is re-derived, propagated layer by layer from bottom to top. When the Blob's size changes, memory is reallocated only if the existing allocation is too small, as the following code shows:

```cpp
template <typename Dtype>
bool Blob<Dtype>::Reshape(const vector<int>& shape) {
  CHECK_LE(shape.size(), kMaxBlobAxes);
  count_ = 1;
  shape_.resize(shape.size());
  if (!shape_data_ || shape_data_->size() < shape.size() * sizeof(int)) {
    shape_data_.reset(new SyncedMemory(shape.size() * sizeof(int)));
  }
  int* shape_data = static_cast<int*>(shape_data_->mutable_cpu_data());
  for (int i = 0; i < shape.size(); ++i) {
    CHECK_GE(shape[i], 0);
    if (count_ != 0) {
      CHECK_LE(shape[i], INT_MAX / count_) << "blob size exceeds INT_MAX";
    }
    count_ *= shape[i];
    shape_[i] = shape[i];
    shape_data[i] = shape[i];
  }
  // Reallocate only when the current capacity is too small; the old memory
  // is released automatically (shared_ptr).
  if (count_ > capacity_) {
    capacity_ = count_;
    data_.reset(new SyncedMemory(capacity_ * sizeof(Dtype)));
    diff_.reset(new SyncedMemory(capacity_ * sizeof(Dtype)));
    return true;
  } else {
    return false;
  }
}
```
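A small usage sketch of this allocation policy (the sizes and function name are hypothetical; in the variant quoted above, the return value reports whether a reallocation happened). Note that when reallocation does occur, the old contents are discarded rather than copied: Reshape manages storage, it is not a data-preserving resize.

```cpp
#include <vector>
#include "caffe/blob.hpp"

using caffe::Blob;

void reshape_demo() {
  Blob<float> blob;
  std::vector<int> s1 = {2, 3, 8, 8};  // count = 384
  std::vector<int> s2 = {2, 3, 4, 4};  // count =  96
  std::vector<int> s3 = {4, 3, 8, 8};  // count = 768

  blob.Reshape(s1);  // capacity 0 -> 384: data_ and diff_ allocated
  blob.Reshape(s2);  // 96 <= 384: shape changes, memory is reused
  blob.Reshape(s3);  // 768 > 384: new SyncedMemory, old contents discarded
}
```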

During training, each layer's parameter update amount diff_ is obtained from the loss function via back-propagation; the Update function is then called to apply it to the parameters, as follows:

```cpp
template <typename Dtype>
void Blob<Dtype>::Update() {
  // We will perform update based on where the data is located.
  switch (data_->head()) {
  case SyncedMemory::HEAD_AT_CPU:
    // perform computation on CPU
    // data = data - diff, axpy: y = a * x + y
    caffe_axpy<Dtype>(count_, Dtype(-1),
        static_cast<const Dtype*>(diff_->cpu_data()),
        static_cast<Dtype*>(data_->mutable_cpu_data()));
    break;
  case SyncedMemory::HEAD_AT_GPU:
  case SyncedMemory::SYNCED:
#ifndef CPU_ONLY
    // perform computation on GPU
    caffe_gpu_axpy<Dtype>(count_, Dtype(-1),
        static_cast<const Dtype*>(diff_->gpu_data()),
        static_cast<Dtype*>(data_->mutable_gpu_data()));
#else
    NO_GPU;
#endif
    break;
  default:
    LOG(FATAL) << "Syncedmem not initialized.";
  }
}
```
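As a minimal sketch of what a solver-side caller effectively does (the function name and values are hypothetical; CPU path only):

```cpp
#include <vector>
#include "caffe/blob.hpp"

using caffe::Blob;

void update_demo() {
  Blob<float> w(std::vector<int>{2, 2});
  float* data = w.mutable_cpu_data();
  float* diff = w.mutable_cpu_diff();
  for (int i = 0; i < w.count(); ++i) {
    data[i] = 1.0f;  // current parameter values
    diff[i] = 0.1f;  // update amounts from back-propagation
  }
  w.Update();  // data = data - diff: every entry is now 0.9
}
```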
