To keep a business running stably and to spot unhealthy services early, log collection gives us a clear view of a service's current health. In traditional development, when only a few machines were deployed, we usually logged in to a server directly to view and debug logs. But as the business grows and services keep being split apart, maintenance becomes increasingly complex. In a distributed system, with many servers and services spread across them, we can no longer use the traditional approach of logging in to each machine to inspect and debug logs when a problem occurs; the complexity of doing so is easy to imagine.
If your system is a simple monolith, or the service is very small, this setup is not recommended; it would be counterproductive.
$ vim xx/filebeat.yaml
filebeat.inputs:
- type: log
  enabled: true
  # enable JSON parsing
  json.keys_under_root: true
  json.add_error_key: true
  # log file paths
  paths:
    - /var/log/order/*.log

setup.template.settings:
  index.number_of_shards: 1

# define the kafka topic field
fields:
  log_topic: log-collection

# output to kafka
output.kafka:
  hosts: ["127.0.0.1:9092"]
  topic: '%{[fields.log_topic]}'
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  keep_alive: 10s

# ================================= Processors =================================
processors:
- decode_json_fields:
    fields: ['@timestamp','level','content','trace','span','duration']
    target: ""
xx is the directory where filebeat.yaml lives.
$ vim config.yaml
Clusters:
- Input:
    Kafka:
      Name: go-stash
      Log:
        Mode: file
      Brokers:
        - "127.0.0.1:9092"
      Topics:
        - log-collection
      Group: stash
      Conns: 3
      Consumers: 10
      Processors: 60
      MinBytes: 1048576
      MaxBytes: 10485760
      Offset: first
  Filters:
    - Action: drop
      Conditions:
        - Key: status
          Value: "503"
          Type: contains
        - Key: type
          Value: "app"
          Type: match
          Op: and
    - Action: remove_field
      Fields:
        - source
        - _score
        - "@metadata"
        - agent
        - ecs
        - input
        - log
        - fields
  Output:
    ElasticSearch:
      Hosts:
        - "http://127.0.0.1:9200"
      Index: "go-stash-{{yyyy.MM.dd}}"
      MaxChunkBytes: 5242880
      GracePeriod: 10s
      Compress: false
      TimeZone: UTC
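The first filter above drops any event whose status field contains "503" and whose type field exactly matches "app" (Op: and joins the two conditions); the second strips the listed metadata fields before output. A small Go sketch of the drop logic, using a hypothetical shouldDrop helper purely to illustrate the semantics:

```go
package main

import (
	"fmt"
	"strings"
)

// shouldDrop mirrors the drop filter above: with Op set to and, BOTH
// conditions must hold — status contains "503" (Type: contains) AND
// type equals "app" exactly (Type: match).
func shouldDrop(event map[string]string) bool {
	statusHit := strings.Contains(event["status"], "503")
	typeHit := event["type"] == "app"
	return statusHit && typeHit
}

func main() {
	fmt.Println(shouldDrop(map[string]string{"status": "503", "type": "app"}))   // dropped: both conditions hold
	fmt.Println(shouldDrop(map[string]string{"status": "503", "type": "nginx"})) // kept: type does not match "app"
}
```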
Open 127.0.0.1:5601 (Kibana) in your browser.
This article only demonstrates collecting logs produced by logx inside a service; collecting nginx logs works the same way.
More suggestions: