UFLDL Tutorial Notes and Exercise Answers III (Softmax Regression and Self-Taught Learning)

1: Softmax Regression

Exercise answers:

Compute the cost and gradient (in softmaxCost.m):

M = theta * data;                      % column i of M is theta * data(:, i) for sample i
M = bsxfun(@minus, M, max(M, [], 1));  % subtract each column's max to prevent overflow
M = exp(M);
p = bsxfun(@rdivide, M, sum(M));       % probability matrix
cost = -1/numCases .* sum(groundTruth(:)' * log(p(:))) + lambda/2 * sum(theta(:).^2);  % cost function
thetagrad = -1/numCases .* (groundTruth - p) * data' + lambda * theta;                 % gradient
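Subtracting the column max is safe because the softmax probabilities are invariant to adding a constant to every class score. For reference, this code vectorizes the tutorial's regularized softmax objective and its gradient:

\[
J(\theta) = -\frac{1}{m}\left[\sum_{i=1}^{m}\sum_{j=1}^{k} 1\{y^{(i)}=j\}\log\frac{e^{\theta_j^\top x^{(i)}}}{\sum_{l=1}^{k}e^{\theta_l^\top x^{(i)}}}\right] + \frac{\lambda}{2}\sum_{i,j}\theta_{ij}^2
\]

\[
\nabla_{\theta_j} J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left[x^{(i)}\left(1\{y^{(i)}=j\} - p(y^{(i)}=j \mid x^{(i)};\theta)\right)\right] + \lambda\theta_j
\]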

(3) Gradient check: the code is already provided.
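The provided computeNumericalGradient.m uses a centered difference. A minimal sketch of that idea, assuming J is a handle that returns the cost as its first output (e.g. @(t) softmaxCost(t, ...)):

function numgrad = computeNumericalGradient(J, theta)
% Centered-difference approximation of the gradient of J at theta.
numgrad = zeros(size(theta));
EPSILON = 1e-4;
for i = 1:numel(theta)
    e = zeros(size(theta));
    e(i) = EPSILON;
    numgrad(i) = (J(theta + e) - J(theta - e)) / (2 * EPSILON);
end
end

Comparing numgrad with thetagrad via norm(numgrad - thetagrad) / norm(numgrad + thetagrad) should give a value on the order of 1e-9 when the analytic gradient is correct.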

(5) Testing

[nop, pred] = max(theta * data); % nop holds each column's maximum; pred holds the row index of that maximum, i.e. the predicted class

2: Self-Taught Learning and Unsupervised Feature Learning

Note: the data is not whitened here as a preprocessing step; when time permits I will try it and compare the results.
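For when that comparison happens, here is a minimal ZCA-whitening sketch in the style of the earlier PCA exercise; epsilon is an assumed regularization constant, not a tuned value:

epsilon = 0.1;                                              % assumed regularizer for small eigenvalues
x = bsxfun(@minus, unlabeledData, mean(unlabeledData, 2));  % zero-mean each pixel across samples
sigma = x * x' / size(x, 2);                                % covariance matrix (columns of x are samples)
[U, S, V] = svd(sigma);                                     % eigenvectors/eigenvalues of the covariance
xZCAwhite = U * diag(1 ./ sqrt(diag(S) + epsilon)) * U' * x;  % ZCA-whitened data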

Exercise answers: the final accuracy is 98.189306%.

(1) Find opttheta by running the sparse autoencoder on the unlabeled training images:

options.Method = 'lbfgs';  % Use L-BFGS to optimize the cost function. Generally,
                           % for minFunc to work, you need a function pointer with
                           % two outputs: the function value and the gradient.
                           % Here, sparseAutoencoderCost.m satisfies this.
options.maxIter = 400;     % maximum number of iterations of L-BFGS to run
options.display = 'on';
[opttheta, cost] = minFunc( @(p) sparseAutoencoderCost(p, ...   % L-BFGS returns the optimal parameters and the final cost
                                 inputSize, hiddenSize, ...
                                 lambda, sparsityParam, ...
                                 beta, unlabeledData), ...
                            theta, options);
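Before step (2) below, the learned weights must be unpacked from the opttheta vector; assuming the standard UFLDL packing order [W1(:); W2(:); b1(:); b2(:)] used by the starter code, this looks like:

W1 = reshape(opttheta(1:hiddenSize*inputSize), hiddenSize, inputSize);        % hidden-layer weights
b1 = opttheta(2*hiddenSize*inputSize+1 : 2*hiddenSize*inputSize+hiddenSize);  % hidden-layer bias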

(2) Extract Features from the Supervised Dataset

b1 = repmat(b1, 1, size(data, 2));  % replicate the bias vector across all samples
Z1 = W1 * data + b1;                % hidden-layer pre-activations
activation = sigmoid(Z1);           % hidden-layer activations: the new features
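The sigmoid call refers to the helper from the starter code; if you are assembling this standalone, a matching definition would be:

function sigm = sigmoid(x)
% Element-wise logistic function: maps pre-activations into (0, 1)
sigm = 1 ./ (1 + exp(-x));
end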

(3) Train the softmax classifier (STEP 4 in the starter script):

lambda = 1e-4;           % if lambda is set too large, accuracy does not reach 98%
numClasses = numLabels;
% Note: the input is no longer trainData but trainFeatures, the hidden-layer
% activations extracted by the sparse autoencoder, used as the new features.
softmaxModel = softmaxTrain(hiddenSize, numClasses, lambda, ...
                            trainFeatures, trainLabels, options);

(4) Testing

[pred] = softmaxPredict(softmaxModel, testFeatures); % predict labels for the test features
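To reproduce the accuracy quoted above, compare the predictions against the test labels (the variable name testLabels is assumed from the exercise script):

acc = mean(testLabels(:) == pred(:));
fprintf('Test Accuracy: %f%%\n', acc * 100);  % the run above reports 98.189306%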
