# vokoyo

Members · 6 · 0 Neutral · Rank: Lepton


1. ## How to modify Adaline Stochastic gradient descent

Dear all,

May I know how to modify my own Python program so that I get the same picture as in the attached file, "Adaline Stochastic gradient descent"? (I am using Anaconda Python 3.7.)

Prayerfully,
Tron Orino Yeong
tcynotebook@yahoo.com
0916643858

```python
from matplotlib.colors import ListedColormap
import matplotlib.pyplot as plt
import numpy as np
from numpy.random import seed
import pandas as pd

# Stochastic Gradient Descent
class SGD(object):
    def __init__(self, rate=0.01, niter=10, shuffle=True, random_state=None):
        self.rate = rate
        self.niter = niter
        self.weight_initialized = False
        # If True, shuffles training data every epoch
        self.shuffle = shuffle
        # Set random state for shuffling and initializing the weights
        if random_state:
            seed(random_state)

    def fit(self, X, y):
        """Fit training data.
        X : Training vectors, X.shape : [#samples, #features]
        y : Target values, y.shape : [#samples]
        """
        self.initialize_weights(X.shape[1])
        self.cost = []
        for i in range(self.niter):
            if self.shuffle:
                X, y = self.shuffle_set(X, y)
            cost = []
            for xi, target in zip(X, y):
                cost.append(self.update_weights(xi, target))
            avg_cost = sum(cost) / len(y)
            self.cost.append(avg_cost)
        return self

    def partial_fit(self, X, y):
        """Fit training data without reinitializing the weights"""
        if not self.weight_initialized:
            self.initialize_weights(X.shape[1])
        if y.ravel().shape[0] > 1:
            for xi, target in zip(X, y):
                self.update_weights(xi, target)
        else:
            # single sample: update once (the original code was cut off here)
            self.update_weights(X, y)
        return self

    def shuffle_set(self, X, y):
        """Shuffle training data"""
        r = np.random.permutation(len(y))
        return X[r], y[r]

    def initialize_weights(self, m):
        """Initialize weights to zeros"""
        self.weight = np.zeros(1 + m)
        self.weight_initialized = True

    def update_weights(self, xi, target):
        """Apply SGD learning rule to update the weights"""
        output = self.net_input(xi)
        error = target - output
        self.weight[1:] += self.rate * xi.dot(error)
        self.weight[0] += self.rate * error
        cost = 0.5 * error**2
        return cost

    def net_input(self, X):
        """Calculate net input"""
        return np.dot(X, self.weight[1:]) + self.weight[0]

    def activation(self, X):
        """Compute linear activation"""
        return self.net_input(X)

    def predict(self, X):
        """Return class label after unit step"""
        return np.where(self.activation(X) >= 0.0, 1, -1)


def plot_decision_regions(X, y, classifier, resolution=0.02):
    # setup marker generator and color map
    markers = ('s', 'x', 'o', '^', 'v')
    colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
    cmap = ListedColormap(colors[:len(np.unique(y))])
    # plot the decision surface
    x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
    x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
    xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
                           np.arange(x2_min, x2_max, resolution))
    Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
    Z = Z.reshape(xx1.shape)
    plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
    plt.xlim(xx1.min(), xx1.max())
    plt.ylim(xx2.min(), xx2.max())
    # plot class samples
    for idx, cl in enumerate(np.unique(y)):
        plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1],
                    alpha=0.8, c=cmap(idx),
                    marker=markers[idx], label=cl)


df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data',
                 header=None)
y = df.iloc[0:100, 4].values
y = np.where(y == 'Iris-setosa', -1, 1)
X = df.iloc[0:100, [0, 2]].values

# standardize
X_std = np.copy(X)
X_std[:, 0] = (X[:, 0] - X[:, 0].mean()) / X[:, 0].std()
X_std[:, 1] = (X[:, 1] - X[:, 1].mean()) / X[:, 1].std()

sgd1 = SGD(niter=100, rate=0.01, random_state=1)
sgd2 = SGD(niter=50, rate=0.01, random_state=1)
sgd3 = SGD(niter=10, rate=0.01, random_state=1)
sgd1.fit(X_std, y)
sgd2.fit(X_std, y)
sgd3.fit(X_std, y)

# Plot the per-epoch average cost. Fixes to the original:
#  - fit() stores the averaged cost in self.cost, so plot it directly
#    (self.cost_ and the extra division by len(y_train) were bugs);
#  - 'oo' and 'xx' are not valid linestyles, use '-', '--', ':';
#  - the legend labels now name what actually differs: the epoch count.
plt.plot(range(1, len(sgd1.cost) + 1), sgd1.cost,
         marker='o', linestyle='-', label='niter=100')
plt.plot(range(1, len(sgd2.cost) + 1), sgd2.cost,
         marker='o', linestyle='--', label='niter=50')
plt.plot(range(1, len(sgd3.cost) + 1), sgd3.cost,
         marker='o', linestyle=':', label='niter=10')
plt.xlabel('Epochs')
plt.ylabel('Average Cost')
plt.legend()
plt.show()
```

Adaline Stochastic gradient descent.pdf
Python Stochastic gradient descent.py
2. ## Matlab Programming Language for Cryptography

Dear all,

May I know how to solve the following cryptography tasks with the Matlab programming language?

(1) Implement the ElGamal method
(2) Implement the Elliptic Curve Cryptography method
(3) Implement the RSA method
(4) Implement the Rabin method

For each, I also need to find:

(a) a primality test
(b) an inverse function (modular inverse)

Please provide your advice and suggestions so that I can improve my computing skills (please see the attached file).

Prayerfully,
Tron Orino Yeong
tcynotebook@yahoo.com
0916643858

Matlab Programming Language Cryptography.doc
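All four systems in the list rest on the two primitives named under "Find": a primality test and a modular inverse. Below is a minimal sketch of both, in Python for concreteness rather than Matlab (the function names `is_probable_prime`, `mod_inverse`, and `extended_gcd` are my own, and porting the logic to Matlab is mechanical). It uses the standard Miller-Rabin probabilistic test and the extended Euclidean algorithm:

```python
import random

def is_probable_prime(n, k=20):
    """Miller-Rabin probabilistic primality test with k random rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # write n - 1 as d * 2^r with d odd
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True

def extended_gcd(a, b):
    """Return (g, x, y) such that a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, m):
    """Inverse of a modulo m, i.e. x with (a * x) % m == 1."""
    g, x, _ = extended_gcd(a % m, m)
    if g != 1:
        raise ValueError("inverse does not exist")
    return x % m
```

RSA, Rabin, and ElGamal all generate their primes with the first function and compute private exponents (or decryption values) with the second; the elliptic-curve point-addition formulas also need `mod_inverse` for the slope computation.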
3. ## How to perform Matlab programming for the biased coin toss simulation

Matlab Programming Method - I get some errors for my coding sample as below. In Matlab, `rand(0,1)` returns an empty 0-by-1 matrix, so the comparison never produces a toss, and `y` is used in `hist(y, 0:n)` without ever being assigned. A corrected version that records the number of heads in each of the `n` experiments of `d` tosses:

```matlab
n = 100;    % number of experiments
d = 100;    % tosses per experiment
q = 0.25;   % bias: probability of a head (run again with q = 0.5)
y = zeros(1, n);
for i = 1:n
    heads = 0;
    for j = 1:d
        x = rand;        % one uniform number in (0,1)
        if x < q
            toss = 0;    % Head
            heads = heads + 1;
        else
            toss = 1;    % Tail
        end
    end
    y(i) = heads;        % heads counted in this experiment
end
hist(y, 0:d)
```
4. ## How to perform Matlab programming for the biased coin toss simulation

Let the bias be the probability of turning up a head, denoted by the parameter q. If we use a coin with bias q to conduct a coin-flipping process d times, the outcome is a sequence of heads and tails, and the probability of observing nH heads is the binomial probability

P(nH) = C(d, nH) · q^nH · (1 − q)^(d − nH),

where nH is the number of heads turned up during the d trials and C(d, nH) is the binomial coefficient. Now conduct the experiments with a simulated coin with q = ½, using a sequence of outcomes from the computer's random-number generator: for example, if the generated number is less than ½, the toss is assigned 0 (head); otherwise it is assigned 1 (tail). I would like to perform the Matlab programming with certain parameters for Pattern Recognition (such as the Binomial Distribution) as below:

(1) d = 100 and n = 100, using a simulated coin with q = ¼ and ½.
(2) d = 10 and n = 1000, using a simulated coin with q = ¼ and ½.
(3) d = 100 and n = 1000, using a simulated coin with q = ¼ and ½.

Kindly please provide your opinion and suggestions so that I will be able to improve my computing skills. (The result of the program's execution will be drawn as a histogram in PowerPoint.)

Pattern Recognition Project - Matlab Programming Method.doc
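The experiment described above can be sketched compactly. This is a minimal Python version rather than Matlab (the helper name `coin_histogram` is my own); it draws d uniform numbers per experiment, counts those below q as heads, and tallies how often each head count occurs, which is exactly the histogram the three parameter settings ask for:

```python
import numpy as np

def coin_histogram(q, d, n, rng=None):
    """Simulate n experiments of d biased-coin tosses each.

    Returns `counts`, where counts[k] is the number of experiments
    in which exactly k heads turned up (k = 0..d). A toss is a head
    when its uniform random number falls below the bias q.
    """
    rng = np.random.default_rng() if rng is None else rng
    tosses = rng.random((n, d)) < q    # boolean matrix, True = head
    n_heads = tosses.sum(axis=1)       # heads per experiment
    return np.bincount(n_heads, minlength=d + 1)

# The three parameter settings from the post, each with q = 1/4 and 1/2;
# the histogram peak should sit near the binomial mean d*q.
for d, n in [(100, 100), (10, 1000), (100, 1000)]:
    for q in (0.25, 0.5):
        counts = coin_histogram(q, d, n)
        print(f"d={d}, n={n}, q={q}: mode at {counts.argmax()} heads")
```

As n grows, the normalized counts approach the binomial probability P(nH) given above, which is what the histograms are meant to show.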