(TBC) > The issue was discovered in Feb 2022. However, I hadn't started to summarize the experiences or update the blog for five months due to the covid waves. Hopefully it's not too late to pick it up in the last month of the first half year.
22 Jun, 2022: Initial post draft.
Jan 09, 2022: Initial post draft.
When building k8s apps, e.g. a reverse proxy to route APIs in a given business domain, a Helm chart is a convenient way to build and ship the app. If it's a prototype, you or your teammates may want to run it locally against a local cluster for quick verification or easier debugging. This post describes tips for building an app that runs both in and out of the cluster.
In short, the app is expected to run locally for debugging and quick demos, but it shall also be delivered as a Helm chart and deployed in a realistic environment for further prototyping with downstream services.
In my practice, the kube client is usually a ClientSet with the CRD scheme registered. For convenience, we can keep the k8s core-scheme client together with the CRD clientset, as below. The kube client can simply be declared in an app sub-package and injected from main.go.

```go
// Field names are illustrative; the original struct body was truncated.
type KubeClientSet struct {
	kubernetes.Interface          // core-scheme clientset
	CRDClient crdclient.Interface // generated CRD clientset
	Host      string              // resolved API host, see NewKubeClient()
}
```
Then in NewKubeClient(), when the clientset is initialized, the host name is filled in to adapt to the in-cluster and out-of-cluster cases, where the flag InCluster can be resolved as os.Getenv("KUBERNETES_SERVICE_HOST") != "". If an app is running in a Kubernetes cluster container and is not prevented from visiting the cluster API host, KUBERNETES_SERVICE_HOST shouldn't be empty.
Another reminder: the k8s runtime package also defines a kubeconfig CLI flag, so readers could use flag.Lookup() to check for it first.

```go
func GetKubeConfig() string {
	var kubeconfig string
	if !InCluster() {
		// Running out of cluster
		homeDir, _ := os.UserHomeDir()
		defaultKc := homeDir + "/.kube/config"
		// k8s-sig runtime also defines a kubeconfig flag. It might be removed in a later version.
		kcFlag := flag.Lookup("kubeconfig")
		if kcFlag == nil {
			flag.StringVar(&kubeconfig, "kubeconfig", defaultKc, "path to Kubernetes config file")
			kcFlag = flag.Lookup("kubeconfig")
		}
		flag.Parse()
		kubeconfig = kcFlag.Value.String()
		if kubeconfig == "" {
			kubeconfig = defaultKc
		}
		log.Printf("Loading kubeconfig from %s", kubeconfig)
	} else {
		log.Printf("Running in cluster..")
	}
	return kubeconfig
}
```
At last, NewKubeClient() shall check whether it's running out of cluster and, if so, fill the Host field with the kube-proxy URL, e.g. localhost:8888 when the local test proxy is kubectl proxy --port=8888. Otherwise, set Host to kubeconfig.Host.
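The host-selection rule can be sketched as below; the client-go wiring around it is elided, and the parameter values in main are illustrative:

```go
package main

import "fmt"

// resolveHost picks the Host field for the kube client config:
// out of cluster it is the kube-proxy URL (e.g. from `kubectl proxy --port=8888`),
// in cluster it is the host resolved from the kubeconfig.
func resolveHost(inCluster bool, kubeconfigHost, proxyURL string) string {
	if !inCluster {
		return proxyURL
	}
	return kubeconfigHost
}

func main() {
	fmt.Println(resolveHost(false, "https://10.0.0.1:443", "http://localhost:8888"))
	// → http://localhost:8888
	fmt.Println(resolveHost(true, "https://10.0.0.1:443", ""))
	// → https://10.0.0.1:443
}
```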
When a target service is located by namespace and service name, the URL can be built for both the in-cluster and out-of-cluster cases.
If running in cluster, the URL to a service is <svc_name> + "." + <svc_ns> + ".svc." + CORE_DNS_HOST. The CoreDNS host is usually cluster.local, but it depends on the cluster configuration.
Otherwise, if running out of cluster and connecting to the cluster API via kube-proxy, the URL is Host + "/api/v1/namespaces/" + <svc_ns> + "/services/" + <svc_name> + "<:port_name>/proxy", where Host is the host value resolved in the NewKubeClient() method.
The tricky point is the <:port_name>: if the targeted port of the service is a named port, the port name is required.
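Both URL formats above can be combined into one helper; this is a sketch, and the namespace/service names in main are illustrative:

```go
package main

import "fmt"

// serviceURL builds the target service URL for the in- and out-of-cluster cases.
// coreDNSHost (usually "cluster.local") is cluster-specific; portName is only
// needed when the service port is a named port.
func serviceURL(inCluster bool, host, ns, svc, portName, coreDNSHost string) string {
	if inCluster {
		return svc + "." + ns + ".svc." + coreDNSHost
	}
	s := svc
	if portName != "" {
		s += ":" + portName
	}
	return host + "/api/v1/namespaces/" + ns + "/services/" + s + "/proxy"
}

func main() {
	fmt.Println(serviceURL(true, "", "demo", "api", "", "cluster.local"))
	// → api.demo.svc.cluster.local
	fmt.Println(serviceURL(false, "http://localhost:8888", "demo", "api", "http", ""))
	// → http://localhost:8888/api/v1/namespaces/demo/services/api:http/proxy
}
```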
Assuming the RBAC is configured well, the k8s app is able to run out of cluster or be packaged in a Helm chart and deployed in a cluster. To run it out of cluster for a quick demo, users need to launch kube-proxy first to expose the cluster API locally.
Oct, 2021: Initial post draft.
The issue starts with an uninitialized slice in a regular PR review. For example, take a sample RESTful handler for a listing operation ("get" on a collection HTTP endpoint) where the code uses a declared slice that is uninitialized by default. I believe it was tested on the normal path with results, either in UT or SIT. However, in the corner case where no resource is found by the query, the RESTful API shall return {"items":[]}.
```go
// Reconstructed from the truncated original listing.
var s []string // uninitialized, s == nil
// suppose s is filled from a query: s = append(s, someResult...)
res, _ := json.Marshal(struct{ Items []string }{Items: s})
fmt.Printf("res = %s\n", res)
```
In the above simplified sample, the uninitialized slice is actually encoded to null. In this case, if someResult is empty, the output will be res = {"Items":null}, which is sometimes unfriendly to downstream APIs.
There are three reference types in Go: slice, chan and map. Their default value is nil if the vars are not initialized. Among them, slice and map are widely used in models for marshalling and unmarshalling. To encode an empty slice or map as an empty collection, the vars should be initialized before use. Go provides the make builtin for slice, map and chan to obtain an initialized value. The value is actually a reference even though Go has no explicit reference type: when a map is passed into a method as an argument, the method can update the shared backing data through this var. This differs from passing a pointer: if an *int is passed as an argument, the pointer itself is copied, so the address of the pointer variable differs within the method, but the method can still update the value the pointer points to.
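A small runnable sketch of these two passing semantics:

```go
package main

import "fmt"

// fill updates the caller's map: the map header is copied on the call,
// but the backing storage is shared.
func fill(m map[string]int) {
	m["k"] = 1
}

// bump receives a copy of the pointer; the copy still points at the
// caller's int, so the pointed-to value can be updated.
func bump(p *int) {
	*p = 42
}

func main() {
	m := map[string]int{}
	fill(m)
	fmt.Println(m["k"]) // 1

	n := 0
	bump(&n)
	fmt.Println(n) // 42
}
```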
```go
var s []string = make([]string, 0) // or s := []string{}
```
If the query result is empty, it will now print res = {"Items":[]}. The same tip applies to map: an uninitialized reference type is marshalled to null, not an empty value.
TBC
Oct, 2021: Initial post draft.
Declarative APIs and declarative models are popular in k8s cloud-native apps. This post shares experience and thoughts from learning about and building a declarative interface prototype.
A declarative software model, usually contrasted with an imperative interface, includes a concept model of a northbound scheme or DSL which describes the desired state of the system, and a southbound provider acting as a controller of async reconciliations.
A typical example is Terraform with its providers.
Reports show around one in five outage incidents occurs because of human error. In an imperative model, users need to consider every detailed API request, version and phased validation, and usually manage the intermediate states programmatically in sequential or async ways. From another point of view, the user end implicitly implements a workflow. This increases the chance of introducing human errors due to the conflict between human nature and complexity.
(TBC)
With a declarative interface, users don't deal with numbers of versions for all the APIs. There can still be a version of the scheme (a universal version, or grouped versions) managed in a relatively simpler way. The backend services can evolve without impact on the user end.
(TBC)
(TBC)
Nov 24, 2021: Continue drafting the model overview section.
Mar 15, 2021: Initial post draft.
OCI (Oracle Cloud Infrastructure) launched a free learning and certificate exam promotion in 2021. I took the exam on May 10, 2021 and scored 89% at the first attempt. An OCI Certificate Badge becomes available within a couple of steps in a follow-up email after you pass the cert exam, usually within 2 working days. Here are some tips to share with readers.
Due to the pandemic, learners can choose to take the exam at home. I registered with Pearson and took the exam with the Pearson app on my MacBook. A kind reminder to run the PC scan prior to the exam date to find whether there are any violations; candidates need to remove the violating apps to keep a compliant environment. It's also suggested to tidy up the room for your exam. Personally, I suggest temporarily moving all books, screens, and anything with printed letters out of the room, and it's better to give yourself a couple of days to get used to it.
The exam style is similar to the AWS cert exams. However, the weights among the areas mentioned in the exam guideline are not described with percentages. Some of the topics are not available to practice with an OCI free tier account. It's not required to practice them, but if possible, getting some credit to play around will be helpful.
The last chapter of the Oracle University free training offers a sample exam. It's suggested to use the sample exam to get familiar with the multiple-choice exam style and pace. The author suggests keeping to one minute per problem and reminding yourself not to spend too much time on a single problem.
Security is a significant feature and a selling point of OCI. Readers are suggested to take special care of availability, integrity and confidentiality, e.g. DDoS attack prevention and SSL certificate configuration.
(TBC)
PS: The promotion of the free cert exam is extended to the end of the year. Hopefully learners around the world can make better use of their time in the pandemic while WFH (working from home).
May 10, 2021: Took the OCI Architect Associate Cert Exam and acquired the cert.
The picture was taken at the Otaki Kite Festival 2020. The little penguin, teddy bear and other little buddies are driving the virus (played by a puffer fish kite) out of their homeland. Two months later, the country-wide lockdown was announced.
I started a new job as a Senior Cloud Native Developer in Dec 2019, developing and maintaining the DevOps pipelines and standing up cloud stacks for global internal partners. The first season was full of fun. We spent a happy holiday back in Nanjing, visited relatives and friends, and enjoyed a Shanghai Disney tour. I walked through the pipelines, rehearsed small innovations, and developed tools to streamline the stack standing-up work, and was glad to see the team liked it, all on the united objective of speeding up cloud stack delivery. I also flew to Sydney for on-site training.
Revised 6 months later
Covid-19 suddenly changed many things. We stayed home together to fight the pandemic. We placed orders online; I still remember that in the first few weeks the countdown delivery slots were only available on Mon and Wed at certain times, and they became fully booked quickly. We struggled to learn how to do home schooling, which was hard for young kids: focusing on a Zoom screen and taking in information amid an unfamiliar atmosphere of anxiety. I confess I didn't realize human beings have such strong adaptability, quietly getting used to a new life pattern.
One of the facts is there was no blog update from April 2020 to April 2021. We were not always locked down; actually, most of the time there were no real restrictions on domestic activities. The material shortage was resolved in 2-3 months. I had the chance to design and implement the orchestration core Python package, and it worked beyond expectations. Then I joined another project where my initiatives were implemented to equip Etcd data with a data model and off-load the network-flow applications. While working from home, I could see high productivity. On the life-balance side, we spent a lovely Christmas holiday in Hawke's Bay. I read a list of books via eReader on Python, Golang and k8s, and walked through podcasts on blockchain and history topics. I even learned how to build mobile apps with React Native and built new toys with my own Firebase account. During this phase, I created an online badminton group; it has 50+ local members now and we play games every Sat. However, many things also happened which we could only absorb by ourselves with family support. Our behavior patterns changed, slowly and unnoticeably, in the background.
But when I take a retrospective, I can feel a lack of participation. It's not strictly related to productivity or innovation; it's engagement with buddies in multiple dimensions: the local community, technical meetups, friends and family. When we reduce face-to-face social activities, the online activities are also impacted. In fact, I had better productivity in work and self-study but less chance to summarize the topics I progressed on. The action plan is straightforward: looking around and reaching out to people.
I drafted a short list of topics and started to take periodic time to look around, summarize and share thoughts and feelings. Buddies know I've taken systematic badminton training for a while; yes, badminton for sure is one of them. I will share the lessons and experiences in movement correction, footwork, power strokes, jump smashes and how to do self-training in the yard. Large events are still restricted (for example, the Cloud Native Summit NZ has been postponed) but small get-togethers of a few people are still okay for the time being; there are plenty of chances to join local associate catch-ups and local Golang, cloud and Rust workshops.
The list includes the topics below, targeting 50% coverage in 12 months:
Python Programming Tips: generic data model, borg pattern, commander pattern
Golang Programming Topics: k8s operator programming, CRD object model, declarative API design, k8s ingress management
Devops Topics: migrating apps in helm chart, charts for DevOps, open application model
Badminton Topics: Footworks, Power Stroke, Smash Practice Points, attack patterns in mixed double games
(Pending): Sharing the stories in India and US travels of early years
(Pending): Wellington attractive locations, Local technical communities
Hi 2021, a late greeting 🤝
Nov 21, 2021: Re-post the image and re-deploy with updated hexo config.
May 01, 2021: Initial post draft.
]]>"Borg" are a hive-mind collective - Star Trek. The term describes a pattern of shared information among multiple instances :_)
In most cases of app state sharing, the design really cares about a set of states can be share among components not whether it's one single object in runtime. Python Borg Pattern is easier and flexible to create shared states for other packages to access and use. It's helpful in configuration management, global IDs or session reusing.
(TBD)
Apr 19, 2017: Initial post draft.
Multi-stage support in Docker image builds was introduced with Docker v17.05 in 2017. This post summarizes the practical points which can benefit the development experience, secure the data and reduce the Docker image size.
The multi-stage Docker image build, in my practice, shows a way to resolve three issues.
Data Security: if there are earlier steps to download the source and set up the toolchain, there is a risk of leaking information via incomplete deletion, or of introducing more vulnerabilities by leaving the artifact-building toolchain on product images. Some cloud solution vendors offer dedicated solutions to build artifacts with homogeneous images and deliver only the final artifacts in the last image.
Reducing Docker Image Size: a Docker image build generates a new layer for each command, and AUFS applies lazy deletion. If caches and temporary files are removed in a second command, the size is not reduced on the volume; the files are merely marked as deleted in the new layer. As pointed out by many Dockerfile best-practice guidelines, the recommended trick is to keep dotnet build, yum install or apt-get install followed by purges in the same command. A multi-stage build can resolve this by copying artifacts from another stage.
Easy-to-Maintain Dockerfile: the above two issues could be mitigated with well-configured multiple images in a procedure that delivers the final artifacts only in the last image. However, the Dockerfiles would then depend on each other and be hard to maintain.
Multi-stage build was introduced to divide the Docker image build into multiple stages which can pass artifacts from one to another and eventually ship only the final artifacts in the last stage.
Take the example of upgrading the google-chrome browser version. The base image is cypress/browsers.
The Dockerfile is straightforward:

```dockerfile
FROM cypress/browsers:node11.13.0-chrome73
ENV TZ=Pacific/Auckland
RUN apt-get update && apt-get install google-chrome-stable -y && \
    google-chrome --version
```
From the logs, it shows chrome browser v78 replaced the original v73. To check the image size, either docker images with labels/tags shows a summary of matched images, or the docker inspect command shows the image details.
Then docker inspect cypress3-chrome-updated-without-purge | jq '.[0].Size' would show the image size 1520046216 in bytes. Alternatively, Docker's native format filter can be applied to get the same result on a given image: docker inspect cypress3-chrome-updated-without-purge --format='{{.Size}}'.
Apply the recommended hack to clean the cache in the same command:

```dockerfile
FROM cypress/browsers:node11.13.0-chrome73
ENV TZ=Pacific/Auckland
# The purge commands are reconstructed; the original listing was truncated.
RUN apt-get update && apt-get install google-chrome-stable -y && \
    google-chrome --version && \
    apt-get clean && rm -rf /var/lib/apt/lists/*
```

This way the image size is reduced to 1503569359 bytes: about 16 MB of caches are removed from the same layer that upgrades the chrome browser.

```
> docker inspect cypress3-chrome-updated-cache-purged --format='{{.Size}}'
1503569359
```
Obviously the Dockerfile is a bit harder to maintain because each step is appended with all kinds of purge commands. If there is no convenient way to purge right away, or it is difficult to maintain such code in one command, a script can be drafted and copied into the intermediate layers to support such a step in one command.
With a quick check, google-chrome is maintained under the /opt/google/chrome folder, and for an experimental image it is okay not to consider the apt-get checksums. The new Dockerfile is drafted as below (the copy step is reconstructed from the description; the original listing was truncated):

```dockerfile
FROM cypress/browsers:node11.13.0-chrome73 as stage1
ENV TZ=Pacific/Auckland
RUN apt-get update && apt-get install google-chrome-stable -y

FROM cypress/browsers:node11.13.0-chrome73
COPY --from=stage1 /opt/google/chrome /opt/google/chrome
```

The first image is also homogeneous, and it just contributes the google-chrome binary files. Then the final image copies the binaries directly into the corresponding folder.
Test the google-chrome version in the CLI:

```
> docker run -it cypress3-chrome-updated-multi-stages google-chrome --version
```
Check the image size: it shows an even smaller size than that from the Dockerfile purging the apt-get system caches, because this solution only copies the required folder. docker inspect cypress3-chrome-updated-multi-stages --format='{{.Size}}' reports the size as 1501127204 bytes.
Less information left on the image: no need to keep additional YUM repos on an RHEL image, no extra keys left, and, more importantly, no development-phase configuration or source code left on the image.
Smaller size: since copying the artifacts is the clean way to add only the requested files to the final image, the size increases only as necessary.
| Building way | Size (bytes) |
|---|---|
| Install pkg from apt-get | 1520046216 |
| Install pkg and purge | 1503569359 |
| Copy binaries from previous stage | 1501127204 |
A better occasion to apply multi-stage Docker image builds is multi-stage compilation. One typical example is upgrading the git version on an RHEL Jenkins slave image. The RHEL official YUM repo only supplies an old version of the git client, which doesn't support advanced functions such as .NET Core NuGet operations. In this case, the solution is to download the git source code and install the gcc toolchain to build it locally. Without multi-stage image builds, the procedure would require cross-compilation of the source code in a separate script, or building it on the Docker image directly for a homogeneous arch. A multi-stage Docker image build can maintain the steps in one single Dockerfile.
On the other side, the sample in this post is not an apt example. If only the chrome binary executables under /opt/google/chrome are updated directly, /etc/alternatives still points to the chrome-stable binary, but the apt package management DB still regards it as the original version v73, not the current version, and the dependency check won't cover v78 either. Like the Sun Solaris package system, it is possible to overwrite the package DB, but that would require one more command and consequently a new Docker image layer. The apt package DB is located at /var/lib/apt/lists.
So apply multi-stage image builds for source-code compilation (especially multi-stage compilation) and for decompressed binary packages such as Node.js.
(TBC)
(WIP)
This was the first issue I spent a big effort on this year, realizing that popular technical stacks were still not ready to adapt themselves to container environments. Typically, if a managed runtime reads the mount point /proc/self/mountinfo as on a regular Linux platform but not /proc/self/cgroup, the memory limits are not observable by its memory management.
The GitHub link is https://github.com/dotnet/coreclr/issues/13489. The fix includes https://github.com/dotnet/coreclr/pull/13488 and https://github.com/dotnet/coreclr/pull/15297 to check cgroup resource limits and expose the Docker processor count to the CLR environment.
The phenomenon was that the dotnet core pod restarted more than 200 times per day, and the OpenShift monitor portal showed OOM Killer in the event description. Luckily, the production environment was deployed with a replica number of 4, so the fintech service was not interrupted. To debug this issue, an image of LLDB on dotnet core was created to detect the threading model and high memory blocks (https://maxwu.me/2019/04/15/Debug-dotnet-core-with-LLDB-on-RHEL-Image/). Per my observation, the high runners were Newtonsoft JSON entities, because lots of memory was consumed by dotnet string buffers.
This issue is actually a JVM configuration problem. It was discovered one day when the Jenkins pod ran slowly, and the Jenkins pod was observed to restart within every 72 hours. Our pipeline was a typical Jenkins Groovy pipeline, and it communicated with two kinds of slaves: (1) dynamic Jenkins slaves created on demand, based on different slave images with the required technical stack; (2) Windows slaves for specific tasks which, for the time being, could only be completed by Windows nodes.
(TBC)
Cypress is the in-browser JavaScript UI test framework I picked for the team last year (2018) when we migrated from host-based Selenium to pipeline.
(TBC)
Go developers can use runtime.GOMAXPROCS() to set the thread limit of the Go runtime (the number of P in the MPG model), or read it by passing the value 0. Since Go v1.5 the default value is the CPU core count. However, when running in a container, the Go runtime still reads the core count from the host, not from the container resource limit.
There is a workaround from Uber's automaxprocs lib. By importing _ "go.uber.org/automaxprocs", the automaxprocs lib initializer reads the core count from the container cgroup limit and sets GOMAXPROCS automatically.
(TBC)
May 01, 2021: Add the goroutine burst issue.
Nov 12, 2019: Initial post with the intro part and the outline.
As a Pythonist at the system level, my experience with Java web frameworks is mostly on Struts MVC as a UI backend interacting with jQuery to present the status and management views of message security gateway products. However, in reality the framework fashion often seems more impressive than the computer science and ways of thinking beneath it.
It's time to take a bite of Spring Boot and see what's inside.
In brief: JetBrains IntelliJ Community Edition on Mac. I used to program Python in PyCharm, and IntelliJ shares similar features in its Java IDE.
The Java toolchain will be organized with Gradle. Maven is an alternative which I used in previous test automation tools; however, Gradle is graceful and brief.
Eventually the service will be wrapped in a Kubernetes pod, but that is not the first step.
The Spring Boot web site offers a curl interface to generate a demo project to start from. Visiting https://start.spring.io with the CLI tool curl shows the manual on how to generate a Spring Boot scaffold:

```
curl https://start.spring.io
```
I chose a demo web project using Java 8, which means a wrapped dependency of spring-boot-starter-web; Spring Boot will interpret it into the real dependencies.

```
curl https://start.spring.io/starter.zip -d dependencies=web -d javaVersion=8 -d type=gradle-project -o demo.zip
```
Alternatively, the IntelliJ "New Project" menu also provides options to visit start.spring.io within the IDE UI to create the project scaffold.
When the scaffold project is imported into IntelliJ, a run configuration is created with the main class DemoApplication, where the annotation @SpringBootApplication is applied. Running the "DemoApplication" configuration launches the Spring Boot web app in a couple of seconds. However, visiting localhost:8080 still returns an error page since there is nothing to respond with.
For a Gradle-configured project, IntelliJ spends a little while downloading the Gradle dependencies.
A simple controller class is added to respond with string content on path /. Thanks to IntelliJ, the annotations are auto-completed. The key points here are the @GetMapping annotation to specify the path / and the @ResponseBody annotation to write the return value into the HTTP response body.

```java
package com.example.demo;

// The class and method names below are illustrative; the original listing was truncated.
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
public class HomeController {

    @GetMapping("/")
    @ResponseBody
    public String home() {
        return "Home";
    }
}
```
Like npm run but more verbose than the node.js command, launching ./gradlew tasks, or directly running gradle tasks in the project root folder, prints out a task list which can be run by the Gradle plugin. If it is the first time running gradle, the Gradle daemon will be launched, and basic environment/dependency checks are performed first.
gradlew and gradlew.bat are artifacts generated by the Gradle wrapper task, which empowers environments without Gradle preinstalled to run Gradle toolchain commands.
After updating the above controller class, running gradle bootRun also runs the Spring Boot application serving localhost:8080. In the browser, the simple content "Home" is fetched and rendered.
As usual, there are multiple ways to build Docker images as the first step to containerize the app. Thanks to the Gradle community, the com.palantir.docker plugin is picked for this demo project.
The Gradle plugin can be applied via the buildscript DSL or the plugins DSL. This experiment applies the plugins DSL and builds the Docker image with a Dockerfile, rather than the docker plugin DSL, to reuse the author's existing Dockerfile experience for now.
Insert this plugin reference into build.gradle: id 'com.palantir.docker' version '0.22.1'.
The docker task is defined as below (a reconstructed sketch; the original listing was truncated and the values are illustrative):

```groovy
docker {
    name "com.example/demo"
    dockerfile file("Dockerfile")
    files bootJar.outputs.files
}
```
To keep the image slim, the Alpine JDK8 image is picked as the base image.

```dockerfile
FROM openjdk:8-jdk-alpine
# The remaining lines are reconstructed; the original listing was truncated.
COPY demo.jar app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
```
With the above Dockerfile and the docker task inserted into build.gradle, running gradle docker (re)builds the app image with its dependencies. Quickly test the Docker image by launching it locally: docker run -p 8080:8080 -t com.example/demo. Then open the browser on URL http://localhost:8080/ and the same content "Home" is responded.
(To be continued)
Sep 22, 2019: Configure and start a new Spring Boot app.
Sep 28, 2019:
After a few weeks of sorting out and working with Python 3 on my MacBook Pro, brew update failed and reported an error of aws command not found.
```
> brew update
```
The solution is straightforward. Since the aws cli is not found, a step was missed when migrating the Mac development environment from Python 2 to Python 3: the corresponding aws cli was not installed for Python 3.
My Python environment is managed via pyenv. When a new Python version is installed, the upstream dependencies are not maintained via requirements.txt, so a manual step is needed to re-enable awscli:

```
> pip3 install awscli --upgrade
```
Usually in a Jenkins pipeline or SaaS DevOps infrastructure, the code coverage check is implemented with Cobertura or the cloud service Coverage.
As described in previous posts, here are samples of the Coverage service and the on-premise Cobertura.
The coverage check is implemented with metrics and thresholds, in other words, a score of code coverage on the current baseline. This won't be a problem when the repo has an ideal coverage level.
For example, if the threshold is set to 95% on the three metrics lines, functions and branches, the coverage check fails when a change breaks the threshold.
On a legacy repo, this can be a problem with a low coverage level. For example, suppose the repo has 45% overall line coverage. On one feature branch, the code change lowers some source-code coverage by accidentally introducing a wrong condition in Jest, but the branch also introduces a batch of new source files and keeps 100% coverage on them. Therefore, it is possible to see an increase in Total Coverage, and due to the lower Cobertura threshold on existing code, the regression cannot be discovered by the coverage check at all. The feature branch can be merged to master with a successful coverage endorsement.
The above is a real case of overall coverage checking in one of my projects.
Since the project mentioned above is a node.js front-end app, the coverage measurement is implemented with Jest coverage. Underneath the Jest framework, istanbul is the code coverage lib. This triggered me to seek a way to compare the coverage result files between the source branch and the target branch.
The solution can rely on a JSON-diff lib to compare the coverage between two branches and fail when any node in the source tree has a decrease in coverage, unless the leaf nodes (file-line, function, branch path) were removed on the source branch.
Here the term leaf node depends on which coverage metrics are selected. It can be one or more of lines, functions and branches, the three coverage metrics supported by istanbul.
The first condition can be satisfied by applying the npm lib istanbul-diff, which is based on the jsondiffpatch lib to compare the increments between the source coverage summary and the target (existing) one.
The second condition can be resolved in the traditional way: Artifactory. In the Jenkins pipeline, a Groovy closure is defined to push the coverage-summary JSON to Artifactory if the current BUILD passes and it is on the master branch.
So the Artifactory-specific path only keeps the latest copy of the master branch coverage result (in JSON format).
When the pipeline determines the build is on a feature branch, it automatically downloads the master coverage summary from Artifactory and applies istanbul-diff to find any loss of coverage, while accepting all positive (incremental) coverage.
To utilize the istanbul-diff tool, the istanbul reporter json-summary is required. By default, Jest applies the parameter ["json", "lcov", "text", "clover"] (refer to the Jest doc).
So package.json could be updated as below (reconstructed from the truncated original; coverageReporters is Jest's standard option):

```json
{
  "jest": {
    "coverageReporters": ["json", "lcov", "text", "clover", "json-summary"]
  }
}
```
The author has just verified the idea with a rough React sample but hasn't tested the solution with a prototype on the pipeline yet. Here are the actions to fulfill and confirm:
Implement the solution above in a POC branch of the pipeline definition file.
Take special care to verify that when leaf nodes are removed, istanbul-diff accepts this rather than treating it as a failure.
When multiple metrics are specified, e.g. both lines and functions, any loss of coverage in one or more of the metrics shall fail the final return code.
A PR was submitted to fix a typo in the istanbul-diff README Markdown doc: https://github.com/moos/istanbul-diff/pull/3
Jun 09, 2019: Initial and roughly tested with sample node.js repo.
Following the roadmap, this is the 4th certificate on Coursera.org on the Machine Learning path.
Two big application areas are ready to commercialize machine learning with more powerful modern CPUs or clouds: computer vision and NLP. Images and written words are two main sources from which our minds extract features, and so does ML.
| Course | Keywords | Completion Date | School |
|---|---|---|---|
| Machine Learning | Andrew Ng course as ML 101 | Completed by 2017-11-05 | Stanford University |
| Introduction to Data Science in Python | An intro to NumPy and Pandas in data science | 4 weeks, completed by 2018-04-08 | University of Michigan |
| Convolutional Neural Networks in TensorFlow | Applying CNN with TensorFlow, techniques avoiding overfitting, and transfer learning | 4 weeks, completed by 2019-05-31 | deeplearning.ai/Coursera |
| Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning | TensorFlow and typical ML techniques and structures for images | 4 weeks, completed by 2019-05-04 | deeplearning.ai/Coursera |
| Course TBD, Machine Learning in NLP | Applying ML to NLP, chatbots | TODO: next step | TBD |
As the above table shows, the next bite will be NLP. Let's move up, buddies!
Hurray! Completed the deeplearning.ai course Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning and achieved the certificate on Coursera!
Checking GitHub, the way to add a customized domain is to add a CNAME file with each domain on one line. If a user manually configures his/her own domain on the GitHub settings tab, a CNAME file is pegged automatically by GitHub. However, the manually grown CNAME file will be purged at the next posting time if hexo is not correctly configured.
Searching the hexo documentation, the place to hold this CNAME file is not the local repo root folder but the root folder of the hexo theme. In my case, it is ./themes/next-wuxubj-5.0.2/. If your hexo applies another theme, please change to the corresponding folder name. This way, the CNAME file is preserved.
(TBC)
Enabling the rhel-7-server-devtools-rpms RPM repo on Red Hat requires a Red Hat developer account registered to RHN. However, in a Docker environment it is not required to register the Docker instance to RHN to add this repo, so the repo can be enabled in the Dockerfile. Then the LLDB toolset can be installed into the image.

```dockerfile
# From a customized RHEL dotnet sdk base image (the image name below is
# illustrative; the original listing was truncated)
FROM rhel7-dotnet-sdk:latest
RUN yum-config-manager --enable rhel-7-server-devtools-rpms && \
    yum install -y llvm-toolset-7-lldb && \
    yum clean all
```
(TODO): Push the image to Docker Hub and launch more tests within an AWS environment.
Complete this post with more details on how to apply LLDB for memory checks and online debugging.
These two certificates were achieved during the above project Evolve.
.
Completed the 2nd data science course and achieved the certificate on coursera!
After the Machine Learning course, I registered for the Data Science Introduction course (University of Michigan) to refresh my Python hands-on skills. When a pop-up asked about updating Anaconda Navigator to a new version, I selected "yes" and it just quit the current Anaconda Navigator window on Mac. However, Anaconda Navigator only shut down without any updates. It might be due to some permission issue.
Here is the command line to update Anaconda Navigator: conda update anaconda-navigator. To execute the correct command when pyenv is installed to wrap multiple Python contexts, you need to select the anaconda pyenv profile and set it to local (or global if intended).
To update conda itself, the command line is conda update conda; to update the current environment to the latest packages (unless dependencies pin some package versions), use conda update --all.
Another way to launch the terminal is to click the environment column in the Anaconda GUI and select "Open Terminal" from the small triangle.