Hierarchical Position Embedding

Tags: Paddle

Pretrained models use three embedding tables:

word_embedding: [vocab_size, hidden_size]

position_embedding: [max_len, hidden_size]

token_type_embedding: [token_type_size, hidden_size]
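The title presumably refers to extending a trained position_embedding table beyond its max_len by hierarchical decomposition (Su Jianlin's trick): position i is encoded as a mix of the entries at i // max_len and i % max_len. A minimal NumPy sketch under that assumption, with the commonly cited default alpha = 0.4 (the function name and alpha value are illustrative, not from this post):

```python
import numpy as np

def hierarchical_position_embedding(pos_embed, max_position, alpha=0.4):
    """Extend a trained position embedding [max_len, hidden] to
    max_position > max_len via hierarchical decomposition:
        p[i] = alpha * u[i // max_len] + (1 - alpha) * u[i % max_len]
    where u is the de-biased original table, chosen so that the first
    max_len positions reproduce the trained embeddings exactly.
    """
    max_len, hidden = pos_embed.shape
    # De-bias: solve pos_embed[i] = alpha*u[0] + (1-alpha)*u[i] for u.
    u = (pos_embed - alpha * pos_embed[:1]) / (1 - alpha)
    idx = np.arange(max_position)
    # u[idx // max_len] is a repeat_interleave of u; u[idx % max_len] is a tile.
    return alpha * u[idx // max_len] + (1 - alpha) * u[idx % max_len]

embed = np.random.randn(512, 768).astype("float32")
ext = hierarchical_position_embedding(embed, 2048)
print(ext.shape)  # (2048, 768)
```

Note that the first term is a repeat-interleave of the table and the second is a tile of it, which is why the repeat_interleave-via-reshape-and-tile recipe below matters for a Paddle implementation.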

Paddle can emulate torch.repeat_interleave / K.repeat_elements using paddle.reshape & paddle.tile.

https://pytorch.org/docs/stable/generated/torch.Tensor.repeat.html#torch.Tensor.repeat

https://pytorch.org/docs/stable/generated/torch.repeat_interleave.html

From the torch.repeat_interleave docs: if repeats is tensor([n1, n2, n3, …]), then the output will be tensor([0, 0, …, 1, 1, …, 2, 2, …, …]) where 0 appears n1 times, 1 appears n2 times, 2 appears n3 times, etc.

torch.repeat — tiles the whole tensor along each dimension (like numpy.tile)

torch.repeat_interleave — repeats each element in place (like numpy.repeat)
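The reshape & tile recipe can be sketched in NumPy (paddle.reshape / paddle.tile behave the same way for this purpose; the helper name is illustrative). The idea: insert a new axis next to the one being repeated, tile along it, then fold it back in.

```python
import numpy as np

def repeat_interleave(x, repeats, axis=0):
    # Emulate torch.repeat_interleave for a scalar `repeats`
    # using only reshape and tile.
    expanded = np.expand_dims(x, axis + 1)   # [..., n, 1, ...]
    reps = [1] * expanded.ndim
    reps[axis + 1] = repeats
    tiled = np.tile(expanded, reps)          # [..., n, k, ...]
    shape = list(x.shape)
    shape[axis] *= repeats
    return tiled.reshape(shape)              # [..., n*k, ...]

x = np.array([0, 1, 2])
print(repeat_interleave(x, 2).tolist())  # [0, 0, 1, 1, 2, 2]  (element-wise repeat)
print(np.tile(x, 2).tolist())            # [0, 1, 2, 0, 1, 2]  (torch.repeat behaviour)
```

The same three calls translate one-to-one to paddle.unsqueeze / paddle.tile / paddle.reshape.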


Mac m1 install Paddle

Tags: Paddle

Problem: installing Paddle into a miniconda environment on an M1 Mac fails:

conda activate paddle
python -m pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple
Looking in indexes: https://mirror.baidu.com/pypi/simple
ERROR: Could not find a version that satisfies the requirement paddlepaddle
ERROR: No matching distribution found for paddlepaddle

https://www.paddlepaddle.org.cn/install/quick?docurl=/documentation/docs/zh/install/pip/macos-pip.html

https://github.com/PaddlePaddle/Paddle/issues/32377

macOS version: 10.x/11.x (64 bit) (GPU builds not supported)

Python version: 3.6/3.7/3.8/3.9 (64 bit)

pip or pip3 version: 20.2.2 or higher (64 bit)

CONDA_SUBDIR=osx-64 conda create -n paddle python==3.8.10  # create a new environment that resolves Intel (osx-64) packages
conda activate paddle
python -c "import platform;print(platform.machine())"  # should print 'x86_64', not 'arm64'
conda env config vars set CONDA_SUBDIR=osx-64  # make sure conda commands in this environment keep using Intel packages
conda deactivate
conda activate paddle  # re-activate so the CONDA_SUBDIR variable takes effect
echo "CONDA_SUBDIR: $CONDA_SUBDIR"
python -m pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple


Bert as Service

Tags: NLP

git clone git@github.com:hanxiao/bert-as-service.git

https://bert-as-service.readthedocs.io/en/latest/section/get-start.html#start-the-bert-service-in-a-docker-container

docker build -t bert-as-service -f ./docker/Dockerfile .

docker run --runtime nvidia -itd -p 8022:5555 -p 8021:5556 -v /bert-as-service/server/model/:/model -t bert-as-service 1 128

usage: /usr/local/bin/bert-serving-start -http_port 8125 -num_worker=4 -max_seq_len=64 -max_batch_size=512 -model_dir /model
                 ARG   VALUE
__________________________________________________
           ckpt_name = bert_model.ckpt
         config_name = bert_config.json
                cors = *
                 cpu = False
          device_map = []
       do_lower_case = True
  fixed_embed_length = False
                fp16 = False
 gpu_memory_fraction = 0.5
       graph_tmp_dir = None
    http_max_connect = 10
           http_port = 8125
        mask_cls_sep = False
      max_batch_size = 512
         max_seq_len = 64
           model_dir = /model
no_position_embeddings = False
    no_special_token = False
          num_worker = 4
       pooling_layer = [-2]
    pooling_strategy = REDUCE_MEAN
                port = 5555
            port_out = 5556
       prefetch_size = 10
 priority_batch_size = 16
show_tokens_to_client = False
     tuned_model_dir = None
             verbose = False
                 xla = False
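With pooling_strategy = REDUCE_MEAN and pooling_layer = [-2], the sentence vector is the average of the token vectors from the second-to-last encoder layer, with padding positions excluded. A NumPy sketch of that pooling step (not the library's actual code; names and shapes are illustrative, using hidden size 768 and max_seq_len 64 as above):

```python
import numpy as np

def reduce_mean_pool(layer_output, mask):
    """Average the token vectors of one encoder layer, ignoring padding.
    layer_output: [seq_len, hidden], mask: [seq_len] of 0/1 (1 = real token)."""
    mask = mask[:, None].astype("float32")   # [seq_len, 1] for broadcasting
    return (layer_output * mask).sum(axis=0) / mask.sum()

# A 10-token sentence padded to max_seq_len = 64.
tokens = np.random.randn(64, 768).astype("float32")
mask = np.array([1] * 10 + [0] * 54)
vec = reduce_mean_pool(tokens, mask)
print(vec.shape)  # (768,)
```

Taking layer -2 instead of the last layer is the bert-as-service default because the final layer is biased toward the pretraining objectives.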