Dart's default linter configuration is fairly weak: a lot of problematic code triggers no error or warning at all. Adding an analyzer applies Dart's best coding practices and raises warnings (or outright errors) for poor code style, which improves code quality.

Cf. Resocoder's introductory article (and video).

Use an analyzer

Resocoder recommends lint, which ships a set of best-practice Dart and Flutter style rules. Similar alternatives are pedantic and effective_dart.

To use it, first add the dependency in pubspec.yaml:

# pubspec.yaml
dev_dependencies:
  lint: ^1.3.0

Then create an analysis_options.yaml file in the project root:

# analysis_options.yaml
include: package:lint/analysis_options.yaml

Exclude folders / disable rules

By default the analyzer scans every Dart file under the project. Some code does not need to be analyzed, e.g. generated code or anything under the build/ folder; use exclude to leave it out.

Also, some of the default rules may not fit your own code. For instance, I like adding an explicit this, because I find it easier to tell member variables apart from ordinary variables ...
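A sketch of what that can look like (the excluded paths and the unnecessary_this rule here are my own examples, not from the original post):

# analysis_options.yaml
include: package:lint/analysis_options.yaml

analyzer:
  exclude:
    - build/**
    - '**/*.g.dart'          # generated code (e.g. json_serializable output)

linter:
  rules:
    unnecessary_this: false  # allow explicit `this.` for member access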

This Pelican blog of mine has existed for more than eight years now, from pelican 2.x with a Bootstrap 2 theme in the beginning to pelican 3 today, with all kinds of tinkering on and off along the way.

The content files are still quite messy, but I am basically happy with how the site looks. The problem is that pelican 3 plus my custom theme/plugins is really tedious to configure; every time I change machines it takes half a day to set up again...

So a while ago I made a Dockerfile plus a GitHub Action that automatically generates the static site's HTML from the markdown/rst files: https://github.com/X-Wei/pelican-gh-actions-xwei

This repo can be run as a GitHub Action: for example, whenever a commit touches the markdown folder, the GitHub actor generates the HTML files and commits them back; see my blog's GitHub workflow configuration (a rough sketch follows below).
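A rough, illustrative sketch of such a workflow (the action reference, the markdown/ path, and the commit step are assumptions here; the real inputs are in the repo above and in my blog's actual workflow file):

# .github/workflows/blog.yml (illustrative sketch only)
name: generate-html
on:
  push:
    paths:
      - 'markdown/**'        # rebuild only when the source posts change
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # run the pelican action from the repo above (assuming it is usable as a Docker action)
      - uses: X-Wei/pelican-gh-actions-xwei@master
      # commit the generated files back to the repo
      - run: |
          git config user.name github-actions
          git config user.email github-actions@github.com
          git add -A && git commit -m "regenerate html" || echo "nothing to commit"
          git push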

Today I tweaked it a bit more so that it can also run locally with Docker, which removes the pain of redoing the setup whenever I change machines (this is also one of the reasons the blog hasn't been updated for the past few months). This post mainly records how to use the Dockerfile to preview or generate the static site on a local machine.

build Docker image

First, use the Dockerfile to build a docker image, and tag it "my-pelican-blog:latest" while we are at it (-t my-pelican-blog:latest):

$ docker build -t ...
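Put together, the commands look roughly like this (the mount point and the port are assumptions and depend on how the Dockerfile and pelicanconf are set up in the repo above; pelican's dev server listens on 8000 by default):

# build the image from the Dockerfile at the repo root, tagged my-pelican-blog:latest
$ docker build -t my-pelican-blog:latest .

# generate / preview the site, mounting the blog sources into the container
$ docker run --rm -v "$(pwd)":/site -p 8000:8000 my-pelican-blog:latest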

This post summarizes the Flutter Explained video on fvm: https://youtu.be/R6vKde1vIGQ

background

Flutter has several channels: stable/beta/dev/master, and some features are currently only available on a specific channel; for example, Linux support is only available on the dev channel for now.

I use the beta channel day to day, but when I want to write a Linux app, running flutter channel dev each time takes a long while because the new channel's contents have to be downloaded, and switching back means waiting again (everything gets re-downloaded).

fvm solves exactly this problem! It caches different versions of the Flutter SDK, each repo can pin its own version, and VSCode needs only a simple configuration to work with it.

enable fvm

A single command enables fvm: $ pub global activate fvm

Common usage:

  • fvm flutter: proxies flutter commands, picking the appropriate Flutter version for the project
  • in other words, use fvm flutter in place of the flutter command ... (see the sketch below)
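A quick sketch of how this looks in practice (the channel/version chosen here is just an example):

# install and cache an SDK version or channel, then pin it for the current project
$ fvm install dev
$ fvm use dev
# from now on, prefix flutter commands with fvm
$ fvm flutter run -d linux

For VSCode, fvm creates a .fvm/flutter_sdk symlink in the project, so pointing the "dart.flutterSdkPath" setting at ".fvm/flutter_sdk" in .vscode/settings.json makes the editor use the pinned SDK.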

Overview

Background (Pre-Neural Machine Translation)

  • machine translation (MT): translate a sentence x in the source language into a sentence y in the target language.
  • 1950s: rule-based, using a bilingual dictionary.

1990s-2010s: Statistical MT (SMT)

using Bayes rule: P(y|x) = P(x|y)*P(y) / P(x)

⇒ The language model we already learnt in prev lectures ⇒ To get the ...
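Written out (the standard SMT decomposition; P(x) does not depend on y, so it drops out of the argmax):

\hat{y} = \arg\max_y P(y \mid x) = \arg\max_y P(x \mid y)\, P(y)

where P(x \mid y) is the translation model (learned from parallel data) and P(y) is the language model from the previous lectures.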

Vanishing Gradient Intuition and Proof

ex: gradient of the loss at position 4 w.r.t. the hidden state at position 1

with the chain rule, the gradient gets smaller and smaller as it backpropagates through earlier time steps

If the largest eigenvalue of W_h is less than 1, the gradient ∂J^(i)/∂h^(j) shrinks exponentially.
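A sketch of the chain-rule product behind this claim (assuming the simple RNN h^{(t)} = \sigma(W_h h^{(t-1)} + W_x x^{(t)} + b)):

\frac{\partial J^{(i)}}{\partial h^{(j)}}
= \frac{\partial J^{(i)}}{\partial h^{(i)}} \prod_{j < t \le i} \frac{\partial h^{(t)}}{\partial h^{(t-1)}}
= \frac{\partial J^{(i)}}{\partial h^{(i)}} \prod_{j < t \le i} \operatorname{diag}\big(\sigma'(\cdot)\big)\, W_h

so the product contains (up to the diagonal factors) a power W_h^{\,i-j}; if the largest eigenvalue of W_h is below 1, this power, and with it the gradient, vanishes exponentially in the distance i - j.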

Why Vanishing Gradient ...

Language Modeling

Language Modeling: the task of predicting what word comes next.

  • i.e. compute the conditional probability distribution of the next word given the words so far (see the formulas below)

  • a language model can also be viewed as a system that assigns a probability to a piece of text.
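In symbols, with x^{(t)} denoting the t-th word, the model computes

P\big(x^{(t+1)} \mid x^{(t)}, \ldots, x^{(1)}\big)

and the probability it assigns to a whole text x^{(1)}, \ldots, x^{(T)} is the chain-rule product

P\big(x^{(1)}, \ldots, x^{(T)}\big) = \prod_{t=1}^{T} P\big(x^{(t)} \mid x^{(t-1)}, \ldots, x^{(1)}\big)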

n-gram Language Models

n-gram Language Model: pre-deep learning solution for language modelling.

idea: Collect ...
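The collected counts give the usual estimate (under the Markov assumption that the next word depends only on the previous n-1 words):

P\big(x^{(t+1)} \mid x^{(t)}, \ldots, x^{(t-n+2)}\big) \approx \frac{\operatorname{count}\big(x^{(t+1)}, x^{(t)}, \ldots, x^{(t-n+2)}\big)}{\operatorname{count}\big(x^{(t)}, \ldots, x^{(t-n+2)}\big)}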

Phrase structure: organize words into nested constituents.

Context-Free Grammars

context-free grammars (CFGs)

  • start with words, words are given a category (part of speech = POS):

  • words combine into phrases with categories like NP (noun phrase) and PP (prepositional phrase):

  • Phrases can combine into bigger phrases recursively:

⇒ forms a tree structure:

Dependency ...

This week: neural net fundamentals

Classification Setup and Notation

training data:

softmax classifier

(linear classifier — hyperplane):

the i-th row of the parameter matrix W is the weight vector for class i, used to compute the logits:

prediction = softmax of f_y:

cross-entropy

goal: for (x, y), maximize p(y|x) ⇒ loss for (x, y) = -log p(y ...
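Putting these pieces together (standard notation: W \in \mathbb{R}^{C \times d}, input x \in \mathbb{R}^{d}, logits f = Wx):

p(y \mid x) = \mathrm{softmax}(f)_y = \frac{\exp(f_y)}{\sum_{c=1}^{C} \exp(f_c)}, \qquad \mathrm{loss}(x, y) = -\log p(y \mid x)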

More Matrix Gradients

Deriving Gradients wrt Words

pitfall in retraining word vectors: if some word is not in the training data but its synonyms are present ⇒ only the synonyms' word vectors get moved

takeaway:

Backpropagation

backprop:

  • apply (generalized) chain rule
  • re-use shared stuff

computation graph

⇒ Go backwards along edges, pass along ...
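What gets passed backwards along each edge follows the usual local rule: downstream gradient = upstream gradient times local gradient. A one-node example (a node z = xy inside a graph with final loss L):

\frac{\partial L}{\partial x} = \frac{\partial L}{\partial z} \cdot \frac{\partial z}{\partial x} = \frac{\partial L}{\partial z}\, y, \qquad \frac{\partial L}{\partial y} = \frac{\partial L}{\partial z}\, x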

More on Word2Vec

parameters θ: the matrices U and V (each word vector is a row):

the predictions don't take into account the distance between the center word c and the outside word o ⇒ all word vectors end up predicting high probability for stopwords.
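Concretely, for skip-gram with the naive softmax (u_o the "outside" vector of word o, v_c the "center" vector of word c):

P(o \mid c) = \frac{\exp(u_o^{\top} v_c)}{\sum_{w \in V} \exp(u_w^{\top} v_c)}

which is the same distribution for every position in the context window, so frequent words such as stopwords get high probability everywhere.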

Optimization Basics

min loss function: J(θ)

gradient descent ...
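The basic update rule, with learning rate (step size) α:

\theta^{\text{new}} = \theta^{\text{old}} - \alpha\, \nabla_{\theta} J(\theta)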